00:00:00.001 Started by upstream project "autotest-per-patch" build number 132832 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.085 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.128 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.178 Using shallow fetch with depth 1 00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.178 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.431 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.443 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.456 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.456 > git config core.sparsecheckout # timeout=10 00:00:04.467 > git read-tree -mu HEAD # timeout=10 00:00:04.483 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.500 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.500 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.620 [Pipeline] Start of Pipeline 00:00:04.630 [Pipeline] library 00:00:04.632 Loading library shm_lib@master 00:00:04.632 Library shm_lib@master is cached. Copying from home. 00:00:04.644 [Pipeline] node 00:00:04.657 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.659 [Pipeline] { 00:00:04.669 [Pipeline] catchError 00:00:04.671 [Pipeline] { 00:00:04.684 [Pipeline] wrap 00:00:04.693 [Pipeline] { 00:00:04.701 [Pipeline] stage 00:00:04.703 [Pipeline] { (Prologue) 00:00:04.720 [Pipeline] echo 00:00:04.722 Node: VM-host-SM9 00:00:04.728 [Pipeline] cleanWs 00:00:04.738 [WS-CLEANUP] Deleting project workspace... 00:00:04.738 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.744 [WS-CLEANUP] done 00:00:04.965 [Pipeline] setCustomBuildProperty 00:00:05.035 [Pipeline] httpRequest 00:00:05.404 [Pipeline] echo 00:00:05.405 Sorcerer 10.211.164.112 is alive 00:00:05.414 [Pipeline] retry 00:00:05.415 [Pipeline] { 00:00:05.427 [Pipeline] httpRequest 00:00:05.431 HttpMethod: GET 00:00:05.432 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.432 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.434 Response Code: HTTP/1.1 200 OK 00:00:05.435 Success: Status code 200 is in the accepted range: 200,404 00:00:05.435 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.019 [Pipeline] } 00:00:06.031 [Pipeline] // retry 00:00:06.036 [Pipeline] sh 00:00:06.313 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.326 [Pipeline] httpRequest 00:00:07.432 [Pipeline] echo 00:00:07.433 Sorcerer 10.211.164.112 is alive 00:00:07.442 [Pipeline] retry 00:00:07.444 [Pipeline] { 00:00:07.458 [Pipeline] httpRequest 00:00:07.462 HttpMethod: GET 00:00:07.463 URL: http://10.211.164.112/packages/spdk_e576aacafae0a7d34c9eefcd66f049c5a6213081.tar.gz 00:00:07.463 Sending request to url: http://10.211.164.112/packages/spdk_e576aacafae0a7d34c9eefcd66f049c5a6213081.tar.gz 00:00:07.476 Response Code: HTTP/1.1 200 OK 00:00:07.476 Success: Status code 200 is in the accepted range: 200,404 00:00:07.477 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e576aacafae0a7d34c9eefcd66f049c5a6213081.tar.gz 00:01:42.685 [Pipeline] } 00:01:42.703 [Pipeline] // retry 00:01:42.711 [Pipeline] sh 00:01:42.992 + tar --no-same-owner -xf spdk_e576aacafae0a7d34c9eefcd66f049c5a6213081.tar.gz 00:01:46.291 [Pipeline] sh 00:01:46.573 + git -C spdk log --oneline -n5 00:01:46.573 e576aacaf build: use VERSION file for storing version 00:01:46.573 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:01:46.573 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:01:46.573 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:01:46.573 e2dfdf06c accel/mlx5: Register post_poller handler 00:01:46.591 [Pipeline] writeFile 00:01:46.605 [Pipeline] sh 00:01:46.886 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:46.898 [Pipeline] sh 00:01:47.179 + cat autorun-spdk.conf 00:01:47.179 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.179 SPDK_TEST_NVMF=1 00:01:47.179 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.179 SPDK_TEST_URING=1 00:01:47.179 SPDK_TEST_USDT=1 00:01:47.179 SPDK_RUN_UBSAN=1 00:01:47.179 NET_TYPE=virt 00:01:47.179 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.186 RUN_NIGHTLY=0 00:01:47.188 [Pipeline] } 00:01:47.201 [Pipeline] // stage 00:01:47.215 [Pipeline] stage 00:01:47.218 [Pipeline] { (Run VM) 00:01:47.230 [Pipeline] sh 00:01:47.511 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:47.511 + echo 'Start stage prepare_nvme.sh' 00:01:47.511 Start stage prepare_nvme.sh 00:01:47.511 + [[ -n 0 ]] 00:01:47.511 + disk_prefix=ex0 00:01:47.511 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:47.511 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:47.511 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:47.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.511 ++ SPDK_TEST_NVMF=1 00:01:47.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:47.511 ++ SPDK_TEST_URING=1 00:01:47.511 ++ SPDK_TEST_USDT=1 00:01:47.511 ++ SPDK_RUN_UBSAN=1 00:01:47.511 ++ NET_TYPE=virt 00:01:47.511 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.511 ++ RUN_NIGHTLY=0 00:01:47.511 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:47.511 + nvme_files=() 00:01:47.511 + declare -A nvme_files 00:01:47.511 + backend_dir=/var/lib/libvirt/images/backends 00:01:47.511 + nvme_files['nvme.img']=5G 00:01:47.511 + nvme_files['nvme-cmb.img']=5G 00:01:47.511 + nvme_files['nvme-multi0.img']=4G 00:01:47.511 + nvme_files['nvme-multi1.img']=4G 00:01:47.511 + nvme_files['nvme-multi2.img']=4G 00:01:47.511 + nvme_files['nvme-openstack.img']=8G 00:01:47.511 + nvme_files['nvme-zns.img']=5G 00:01:47.511 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:47.511 + (( SPDK_TEST_FTL == 1 )) 00:01:47.511 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:47.511 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.511 + for nvme in "${!nvme_files[@]}" 00:01:47.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:47.511 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.770 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:47.770 + echo 'End stage prepare_nvme.sh' 00:01:47.770 End stage prepare_nvme.sh 00:01:47.782 [Pipeline] sh 00:01:48.062 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:48.063 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b 
/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:48.063 00:01:48.063 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:48.063 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:48.063 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:48.063 HELP=0 00:01:48.063 DRY_RUN=0 00:01:48.063 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:48.063 NVME_DISKS_TYPE=nvme,nvme, 00:01:48.063 NVME_AUTO_CREATE=0 00:01:48.063 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:48.063 NVME_CMB=,, 00:01:48.063 NVME_PMR=,, 00:01:48.063 NVME_ZNS=,, 00:01:48.063 NVME_MS=,, 00:01:48.063 NVME_FDP=,, 00:01:48.063 SPDK_VAGRANT_DISTRO=fedora39 00:01:48.063 SPDK_VAGRANT_VMCPU=10 00:01:48.063 SPDK_VAGRANT_VMRAM=12288 00:01:48.063 SPDK_VAGRANT_PROVIDER=libvirt 00:01:48.063 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:48.063 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:48.063 SPDK_OPENSTACK_NETWORK=0 00:01:48.063 VAGRANT_PACKAGE_BOX=0 00:01:48.063 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:48.063 FORCE_DISTRO=true 00:01:48.063 VAGRANT_BOX_VERSION= 00:01:48.063 EXTRA_VAGRANTFILES= 00:01:48.063 NIC_MODEL=e1000 00:01:48.063 00:01:48.063 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:48.063 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:51.351 Bringing machine 'default' up with 'libvirt' provider... 00:01:51.609 ==> default: Creating image (snapshot of base box volume). 00:01:51.868 ==> default: Creating domain with the following settings... 
00:01:51.868 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733839636_d38d68a566978fbf281c
00:01:51.868 ==> default: -- Domain type: kvm
00:01:51.868 ==> default: -- Cpus: 10
00:01:51.868 ==> default: -- Feature: acpi
00:01:51.868 ==> default: -- Feature: apic
00:01:51.868 ==> default: -- Feature: pae
00:01:51.868 ==> default: -- Memory: 12288M
00:01:51.868 ==> default: -- Memory Backing: hugepages:
00:01:51.868 ==> default: -- Management MAC:
00:01:51.868 ==> default: -- Loader:
00:01:51.868 ==> default: -- Nvram:
00:01:51.868 ==> default: -- Base box: spdk/fedora39
00:01:51.868 ==> default: -- Storage pool: default
00:01:51.868 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733839636_d38d68a566978fbf281c.img (20G)
00:01:51.868 ==> default: -- Volume Cache: default
00:01:51.868 ==> default: -- Kernel:
00:01:51.868 ==> default: -- Initrd:
00:01:51.868 ==> default: -- Graphics Type: vnc
00:01:51.868 ==> default: -- Graphics Port: -1
00:01:51.868 ==> default: -- Graphics IP: 127.0.0.1
00:01:51.868 ==> default: -- Graphics Password: Not defined
00:01:51.868 ==> default: -- Video Type: cirrus
00:01:51.868 ==> default: -- Video VRAM: 9216
00:01:51.868 ==> default: -- Sound Type:
00:01:51.868 ==> default: -- Keymap: en-us
00:01:51.868 ==> default: -- TPM Path:
00:01:51.868 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:51.868 ==> default: -- Command line args:
00:01:51.868 ==> default: -> value=-device,
00:01:51.868 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:51.868 ==> default: -> value=-drive,
00:01:51.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:01:51.868 ==> default: -> value=-device,
00:01:51.868 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.868 ==> default: -> value=-device,
00:01:51.868 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:51.868 ==> default: -> value=-drive,
00:01:51.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:51.868 ==> default: -> value=-device,
00:01:51.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.868 ==> default: -> value=-drive,
00:01:51.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:51.868 ==> default: -> value=-device,
00:01:51.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.868 ==> default: -> value=-drive,
00:01:51.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:51.868 ==> default: -> value=-device,
00:01:51.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.868 ==> default: Creating shared folders metadata...
00:01:51.868 ==> default: Starting domain.
00:01:53.248 ==> default: Waiting for domain to get an IP address...
00:02:11.336 ==> default: Waiting for SSH to become available...
00:02:11.336 ==> default: Configuring and enabling network interfaces...
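The "Command line args" block above lists the extra QEMU arguments vagrant-libvirt attaches to the domain it just defined. Read together with the SPDK_QEMU_EMULATOR path shown in the Setup output earlier, they correspond roughly to an invocation of the following shape (a sketch only; the machine, memory and network arguments that libvirt itself generates are not shown in this log and are elided as "..."):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

In other words, the guest gets two emulated NVMe controllers: serial 12340 with a single namespace backed by ex0-nvme.img, and serial 12341 with three namespaces backed by the ex0-nvme-multi0/1/2.img files, which is what setup.sh status later reports as nvme0 (nvme0n1) and nvme1 (nvme1n1 nvme1n2 nvme1n3).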
00:02:13.871 default: SSH address: 192.168.121.29:22
00:02:13.871 default: SSH username: vagrant
00:02:13.871 default: SSH auth method: private key
00:02:16.405 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:24.519 ==> default: Mounting SSHFS shared folder...
00:02:25.085 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:25.085 ==> default: Checking Mount..
00:02:26.459 ==> default: Folder Successfully Mounted!
00:02:26.459 ==> default: Running provisioner: file...
00:02:27.026 default: ~/.gitconfig => .gitconfig
00:02:27.592
00:02:27.592 SUCCESS!
00:02:27.592
00:02:27.593 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:27.593 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:27.593 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:27.593
00:02:27.601 [Pipeline] }
00:02:27.616 [Pipeline] // stage
00:02:27.626 [Pipeline] dir
00:02:27.626 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:02:27.628 [Pipeline] {
00:02:27.640 [Pipeline] catchError
00:02:27.642 [Pipeline] {
00:02:27.655 [Pipeline] sh
00:02:27.934 + vagrant ssh-config --host vagrant
00:02:27.934 + sed -ne /^Host/,$p
00:02:27.934 + tee ssh_conf
00:02:31.222 Host vagrant
00:02:31.222 HostName 192.168.121.29
00:02:31.222 User vagrant
00:02:31.222 Port 22
00:02:31.222 UserKnownHostsFile /dev/null
00:02:31.222 StrictHostKeyChecking no
00:02:31.222 PasswordAuthentication no
00:02:31.222 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:31.222 IdentitiesOnly yes
00:02:31.222 LogLevel FATAL
00:02:31.222 ForwardAgent yes
00:02:31.222 ForwardX11 yes
00:02:31.222
00:02:31.236 [Pipeline] withEnv
00:02:31.239 [Pipeline] {
00:02:31.253 [Pipeline] sh
00:02:31.535 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:31.535 source /etc/os-release
00:02:31.535 [[ -e /image.version ]] && img=$(< /image.version)
00:02:31.535 # Minimal, systemd-like check.
00:02:31.535 if [[ -e /.dockerenv ]]; then
00:02:31.535 # Clear garbage from the node's name:
00:02:31.535 # agt-er_autotest_547-896 -> autotest_547-896
00:02:31.535 # $HOSTNAME is the actual container id
00:02:31.535 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:31.535 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:31.535 # We can assume this is a mount from a host where container is running,
00:02:31.535 # so fetch its hostname to easily identify the target swarm worker.
00:02:31.535 container="$(< /etc/hostname) ($agent)" 00:02:31.535 else 00:02:31.535 # Fallback 00:02:31.535 container=$agent 00:02:31.535 fi 00:02:31.535 fi 00:02:31.535 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:31.535 00:02:31.559 [Pipeline] } 00:02:31.575 [Pipeline] // withEnv 00:02:31.583 [Pipeline] setCustomBuildProperty 00:02:31.597 [Pipeline] stage 00:02:31.599 [Pipeline] { (Tests) 00:02:31.616 [Pipeline] sh 00:02:31.894 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:31.907 [Pipeline] sh 00:02:32.185 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:32.456 [Pipeline] timeout 00:02:32.456 Timeout set to expire in 1 hr 0 min 00:02:32.458 [Pipeline] { 00:02:32.471 [Pipeline] sh 00:02:32.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:33.316 HEAD is now at e576aacaf build: use VERSION file for storing version 00:02:33.328 [Pipeline] sh 00:02:33.606 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:33.877 [Pipeline] sh 00:02:34.151 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:34.423 [Pipeline] sh 00:02:34.701 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:34.959 ++ readlink -f spdk_repo 00:02:34.959 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:34.959 + [[ -n /home/vagrant/spdk_repo ]] 00:02:34.959 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:34.959 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:34.959 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:34.959 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:02:34.959 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:34.959 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:02:34.959 + cd /home/vagrant/spdk_repo
00:02:34.959 + source /etc/os-release
00:02:34.959 ++ NAME='Fedora Linux'
00:02:34.959 ++ VERSION='39 (Cloud Edition)'
00:02:34.959 ++ ID=fedora
00:02:34.959 ++ VERSION_ID=39
00:02:34.959 ++ VERSION_CODENAME=
00:02:34.959 ++ PLATFORM_ID=platform:f39
00:02:34.959 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:34.959 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:34.959 ++ LOGO=fedora-logo-icon
00:02:34.959 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:34.959 ++ HOME_URL=https://fedoraproject.org/
00:02:34.959 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:34.959 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:34.959 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:34.959 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:34.959 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:34.959 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:34.959 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:34.959 ++ SUPPORT_END=2024-11-12
00:02:34.959 ++ VARIANT='Cloud Edition'
00:02:34.959 ++ VARIANT_ID=cloud
00:02:34.959 + uname -a
00:02:34.960 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:34.960 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:35.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:35.526 Hugepages
00:02:35.526 node hugesize free / total
00:02:35.526 node0 1048576kB 0 / 0
00:02:35.526 node0 2048kB 0 / 0
00:02:35.526
00:02:35.526 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:35.526 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:35.526 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:35.526 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:35.526 + rm -f /tmp/spdk-ld-path
00:02:35.526 + source autorun-spdk.conf
00:02:35.526 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.526 ++ SPDK_TEST_NVMF=1
00:02:35.526 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:35.526 ++ SPDK_TEST_URING=1
00:02:35.526 ++ SPDK_TEST_USDT=1
00:02:35.526 ++ SPDK_RUN_UBSAN=1
00:02:35.526 ++ NET_TYPE=virt
00:02:35.526 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:35.526 ++ RUN_NIGHTLY=0
00:02:35.526 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:35.526 + [[ -n '' ]]
00:02:35.526 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:35.526 + for M in /var/spdk/build-*-manifest.txt
00:02:35.526 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:35.526 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.526 + for M in /var/spdk/build-*-manifest.txt
00:02:35.526 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:35.526 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.526 + for M in /var/spdk/build-*-manifest.txt
00:02:35.526 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:35.526 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.526 ++ uname
00:02:35.526 + [[ Linux == \L\i\n\u\x ]]
00:02:35.526 + sudo dmesg -T
00:02:35.526 + sudo dmesg --clear
00:02:35.526 + dmesg_pid=5262
00:02:35.526 + sudo dmesg -Tw
00:02:35.526 + [[ Fedora Linux == FreeBSD ]]
00:02:35.526 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:35.526 +
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.526 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.526 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.526 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.526 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.526 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.526 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.526 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.526 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.526 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.526 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.526 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.526 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.526 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.526 14:08:00 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:35.526 14:08:00 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.526 14:08:00 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:35.526 14:08:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:35.526 14:08:00 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.784 14:08:00 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:35.784 14:08:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.784 14:08:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:35.784 14:08:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.784 14:08:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.784 14:08:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.784 14:08:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.784 14:08:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.784 14:08:00 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.784 14:08:00 -- paths/export.sh@5 -- $ export PATH 00:02:35.785 14:08:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.785 14:08:00 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.785 14:08:00 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:35.785 14:08:00 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733839680.XXXXXX 00:02:35.785 14:08:00 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733839680.rZhGOn 00:02:35.785 14:08:00 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:35.785 14:08:00 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:35.785 14:08:00 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:35.785 14:08:00 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.785 14:08:00 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.785 14:08:00 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:35.785 14:08:00 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:35.785 14:08:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.785 14:08:00 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:35.785 14:08:00 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:35.785 14:08:00 -- pm/common@17 -- $ local monitor 00:02:35.785 14:08:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.785 14:08:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.785 14:08:00 -- pm/common@25 -- $ sleep 1 00:02:35.785 14:08:00 -- pm/common@21 -- $ date +%s 00:02:35.785 14:08:00 -- pm/common@21 -- $ date +%s 00:02:35.785 14:08:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733839680 00:02:35.785 14:08:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733839680 00:02:35.785 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733839680_collect-vmstat.pm.log 00:02:35.785 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733839680_collect-cpu-load.pm.log 00:02:36.719 14:08:01 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:36.719 14:08:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.719 14:08:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.719 14:08:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.719 14:08:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.719 Tue Dec 10 02:08:01 PM UTC 2024 00:02:36.719 14:08:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.719 v25.01-pre-304-ge576aacaf 00:02:36.719 14:08:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:36.719 14:08:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.719 14:08:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.719 14:08:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.719 14:08:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.719 14:08:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.719 ************************************ 00:02:36.719 START TEST ubsan 00:02:36.719 ************************************ 00:02:36.720 using ubsan 00:02:36.720 14:08:01 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:36.720 00:02:36.720 real 0m0.000s 00:02:36.720 user 0m0.000s 00:02:36.720 sys 0m0.000s 00:02:36.720 14:08:01 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.720 14:08:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.720 ************************************ 00:02:36.720 END TEST ubsan 00:02:36.720 ************************************ 00:02:36.720 14:08:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:36.720 14:08:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:36.720 14:08:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:36.720 14:08:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:36.720 14:08:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.720 14:08:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.720 14:08:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.720 14:08:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:36.720 14:08:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:36.978 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:36.978 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:37.545 Using 'verbs' RDMA provider 00:02:50.734 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:05.623 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:05.623 Creating mk/config.mk...done. 00:03:05.623 Creating mk/cc.flags.mk...done. 00:03:05.623 Type 'make' to build. 
00:03:05.623 14:08:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:05.623 14:08:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:05.623 14:08:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:05.623 14:08:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.623 ************************************ 00:03:05.623 START TEST make 00:03:05.623 ************************************ 00:03:05.623 14:08:28 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:17.824 The Meson build system 00:03:17.824 Version: 1.5.0 00:03:17.824 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:17.824 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:17.824 Build type: native build 00:03:17.824 Program cat found: YES (/usr/bin/cat) 00:03:17.824 Project name: DPDK 00:03:17.824 Project version: 24.03.0 00:03:17.824 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:17.824 C linker for the host machine: cc ld.bfd 2.40-14 00:03:17.824 Host machine cpu family: x86_64 00:03:17.824 Host machine cpu: x86_64 00:03:17.824 Message: ## Building in Developer Mode ## 00:03:17.824 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:17.824 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:17.824 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:17.824 Program python3 found: YES (/usr/bin/python3) 00:03:17.824 Program cat found: YES (/usr/bin/cat) 00:03:17.824 Compiler for C supports arguments -march=native: YES 00:03:17.824 Checking for size of "void *" : 8 00:03:17.824 Checking for size of "void *" : 8 (cached) 00:03:17.824 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:17.824 Library m found: YES 00:03:17.824 Library numa found: YES 00:03:17.824 Has header "numaif.h" : YES 00:03:17.824 Library fdt found: NO 00:03:17.824 Library execinfo found: NO 00:03:17.824 Has header "execinfo.h" : YES 00:03:17.824 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:17.824 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:17.824 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:17.824 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:17.824 Run-time dependency openssl found: YES 3.1.1 00:03:17.824 Run-time dependency libpcap found: YES 1.10.4 00:03:17.824 Has header "pcap.h" with dependency libpcap: YES 00:03:17.824 Compiler for C supports arguments -Wcast-qual: YES 00:03:17.824 Compiler for C supports arguments -Wdeprecated: YES 00:03:17.824 Compiler for C supports arguments -Wformat: YES 00:03:17.824 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:17.824 Compiler for C supports arguments -Wformat-security: NO 00:03:17.824 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:17.824 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:17.824 Compiler for C supports arguments -Wnested-externs: YES 00:03:17.824 Compiler for C supports arguments -Wold-style-definition: YES 00:03:17.824 Compiler for C supports arguments -Wpointer-arith: YES 00:03:17.824 Compiler for C supports arguments -Wsign-compare: YES 00:03:17.824 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:17.824 Compiler for C supports arguments -Wundef: YES 00:03:17.824 Compiler for C supports arguments -Wwrite-strings: YES 00:03:17.824 Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:17.824 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:17.824 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:17.824 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:17.824 Program objdump found: YES (/usr/bin/objdump) 00:03:17.824 Compiler for C supports arguments -mavx512f: YES 00:03:17.824 Checking if "AVX512 checking" compiles: YES 00:03:17.824 Fetching value of define "__SSE4_2__" : 1 00:03:17.824 Fetching value of define "__AES__" : 1 00:03:17.824 Fetching value of define "__AVX__" : 1 00:03:17.824 Fetching value of define "__AVX2__" : 1 00:03:17.824 Fetching value of define "__AVX512BW__" : (undefined) 00:03:17.824 Fetching value of define "__AVX512CD__" : (undefined) 00:03:17.824 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:17.824 Fetching value of define "__AVX512F__" : (undefined) 00:03:17.824 Fetching value of define "__AVX512VL__" : (undefined) 00:03:17.824 Fetching value of define "__PCLMUL__" : 1 00:03:17.824 Fetching value of define "__RDRND__" : 1 00:03:17.824 Fetching value of define "__RDSEED__" : 1 00:03:17.824 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:17.824 Fetching value of define "__znver1__" : (undefined) 00:03:17.824 Fetching value of define "__znver2__" : (undefined) 00:03:17.824 Fetching value of define "__znver3__" : (undefined) 00:03:17.824 Fetching value of define "__znver4__" : (undefined) 00:03:17.824 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:17.824 Message: lib/log: Defining dependency "log" 00:03:17.824 Message: lib/kvargs: Defining dependency "kvargs" 00:03:17.824 Message: lib/telemetry: Defining dependency "telemetry" 00:03:17.824 Checking for function "getentropy" : NO 00:03:17.824 Message: lib/eal: Defining dependency "eal" 00:03:17.824 Message: lib/ring: Defining dependency "ring" 00:03:17.824 Message: lib/rcu: Defining dependency "rcu" 00:03:17.824 Message: lib/mempool: Defining dependency "mempool" 00:03:17.824 Message: lib/mbuf: Defining dependency "mbuf" 00:03:17.824 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:17.824 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:17.824 Compiler for C supports arguments -mpclmul: YES 00:03:17.824 Compiler for C supports arguments -maes: YES 00:03:17.824 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:17.824 Compiler for C supports arguments -mavx512bw: YES 00:03:17.824 Compiler for C supports arguments -mavx512dq: YES 00:03:17.824 Compiler for C supports arguments -mavx512vl: YES 00:03:17.824 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:17.824 Compiler for C supports arguments -mavx2: YES 00:03:17.824 Compiler for C supports arguments -mavx: YES 00:03:17.824 Message: lib/net: Defining dependency "net" 00:03:17.824 Message: lib/meter: Defining dependency "meter" 00:03:17.824 Message: lib/ethdev: Defining dependency "ethdev" 00:03:17.824 Message: lib/pci: Defining dependency "pci" 00:03:17.824 Message: lib/cmdline: Defining dependency "cmdline" 00:03:17.824 Message: lib/hash: Defining dependency "hash" 00:03:17.824 Message: lib/timer: Defining dependency "timer" 00:03:17.824 Message: lib/compressdev: Defining dependency "compressdev" 00:03:17.824 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:17.824 Message: lib/dmadev: Defining dependency "dmadev" 00:03:17.824 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:17.824 Message: lib/power: Defining dependency "power" 00:03:17.824 Message: 
lib/reorder: Defining dependency "reorder" 00:03:17.824 Message: lib/security: Defining dependency "security" 00:03:17.824 Has header "linux/userfaultfd.h" : YES 00:03:17.824 Has header "linux/vduse.h" : YES 00:03:17.824 Message: lib/vhost: Defining dependency "vhost" 00:03:17.824 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:17.824 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:17.824 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:17.824 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:17.824 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:17.824 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:17.824 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:17.824 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:17.824 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:17.824 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:17.824 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:17.824 Configuring doxy-api-html.conf using configuration 00:03:17.824 Configuring doxy-api-man.conf using configuration 00:03:17.824 Program mandb found: YES (/usr/bin/mandb) 00:03:17.824 Program sphinx-build found: NO 00:03:17.824 Configuring rte_build_config.h using configuration 00:03:17.824 Message: 00:03:17.824 ================= 00:03:17.824 Applications Enabled 00:03:17.824 ================= 00:03:17.824 00:03:17.824 apps: 00:03:17.824 00:03:17.824 00:03:17.824 Message: 00:03:17.824 ================= 00:03:17.824 Libraries Enabled 00:03:17.824 ================= 00:03:17.824 00:03:17.824 libs: 00:03:17.824 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:17.824 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:17.824 cryptodev, dmadev, power, reorder, security, vhost, 00:03:17.824 00:03:17.824 Message: 00:03:17.824 =============== 00:03:17.824 Drivers Enabled 00:03:17.824 =============== 00:03:17.824 00:03:17.824 common: 00:03:17.824 00:03:17.824 bus: 00:03:17.824 pci, vdev, 00:03:17.824 mempool: 00:03:17.824 ring, 00:03:17.824 dma: 00:03:17.824 00:03:17.824 net: 00:03:17.824 00:03:17.824 crypto: 00:03:17.824 00:03:17.824 compress: 00:03:17.824 00:03:17.824 vdpa: 00:03:17.824 00:03:17.824 00:03:17.824 Message: 00:03:17.824 ================= 00:03:17.824 Content Skipped 00:03:17.824 ================= 00:03:17.824 00:03:17.824 apps: 00:03:17.824 dumpcap: explicitly disabled via build config 00:03:17.824 graph: explicitly disabled via build config 00:03:17.824 pdump: explicitly disabled via build config 00:03:17.824 proc-info: explicitly disabled via build config 00:03:17.824 test-acl: explicitly disabled via build config 00:03:17.824 test-bbdev: explicitly disabled via build config 00:03:17.824 test-cmdline: explicitly disabled via build config 00:03:17.824 test-compress-perf: explicitly disabled via build config 00:03:17.824 test-crypto-perf: explicitly disabled via build config 00:03:17.824 test-dma-perf: explicitly disabled via build config 00:03:17.824 test-eventdev: explicitly disabled via build config 00:03:17.824 test-fib: explicitly disabled via build config 00:03:17.824 test-flow-perf: explicitly disabled via build config 00:03:17.824 test-gpudev: explicitly disabled via build config 00:03:17.824 test-mldev: explicitly disabled via build config 00:03:17.825 test-pipeline: explicitly disabled via build config 
00:03:17.825 test-pmd: explicitly disabled via build config 00:03:17.825 test-regex: explicitly disabled via build config 00:03:17.825 test-sad: explicitly disabled via build config 00:03:17.825 test-security-perf: explicitly disabled via build config 00:03:17.825 00:03:17.825 libs: 00:03:17.825 argparse: explicitly disabled via build config 00:03:17.825 metrics: explicitly disabled via build config 00:03:17.825 acl: explicitly disabled via build config 00:03:17.825 bbdev: explicitly disabled via build config 00:03:17.825 bitratestats: explicitly disabled via build config 00:03:17.825 bpf: explicitly disabled via build config 00:03:17.825 cfgfile: explicitly disabled via build config 00:03:17.825 distributor: explicitly disabled via build config 00:03:17.825 efd: explicitly disabled via build config 00:03:17.825 eventdev: explicitly disabled via build config 00:03:17.825 dispatcher: explicitly disabled via build config 00:03:17.825 gpudev: explicitly disabled via build config 00:03:17.825 gro: explicitly disabled via build config 00:03:17.825 gso: explicitly disabled via build config 00:03:17.825 ip_frag: explicitly disabled via build config 00:03:17.825 jobstats: explicitly disabled via build config 00:03:17.825 latencystats: explicitly disabled via build config 00:03:17.825 lpm: explicitly disabled via build config 00:03:17.825 member: explicitly disabled via build config 00:03:17.825 pcapng: explicitly disabled via build config 00:03:17.825 rawdev: explicitly disabled via build config 00:03:17.825 regexdev: explicitly disabled via build config 00:03:17.825 mldev: explicitly disabled via build config 00:03:17.825 rib: explicitly disabled via build config 00:03:17.825 sched: explicitly disabled via build config 00:03:17.825 stack: explicitly disabled via build config 00:03:17.825 ipsec: explicitly disabled via build config 00:03:17.825 pdcp: explicitly disabled via build config 00:03:17.825 fib: explicitly disabled via build config 00:03:17.825 port: explicitly disabled via build config 00:03:17.825 pdump: explicitly disabled via build config 00:03:17.825 table: explicitly disabled via build config 00:03:17.825 pipeline: explicitly disabled via build config 00:03:17.825 graph: explicitly disabled via build config 00:03:17.825 node: explicitly disabled via build config 00:03:17.825 00:03:17.825 drivers: 00:03:17.825 common/cpt: not in enabled drivers build config 00:03:17.825 common/dpaax: not in enabled drivers build config 00:03:17.825 common/iavf: not in enabled drivers build config 00:03:17.825 common/idpf: not in enabled drivers build config 00:03:17.825 common/ionic: not in enabled drivers build config 00:03:17.825 common/mvep: not in enabled drivers build config 00:03:17.825 common/octeontx: not in enabled drivers build config 00:03:17.825 bus/auxiliary: not in enabled drivers build config 00:03:17.825 bus/cdx: not in enabled drivers build config 00:03:17.825 bus/dpaa: not in enabled drivers build config 00:03:17.825 bus/fslmc: not in enabled drivers build config 00:03:17.825 bus/ifpga: not in enabled drivers build config 00:03:17.825 bus/platform: not in enabled drivers build config 00:03:17.825 bus/uacce: not in enabled drivers build config 00:03:17.825 bus/vmbus: not in enabled drivers build config 00:03:17.825 common/cnxk: not in enabled drivers build config 00:03:17.825 common/mlx5: not in enabled drivers build config 00:03:17.825 common/nfp: not in enabled drivers build config 00:03:17.825 common/nitrox: not in enabled drivers build config 00:03:17.825 common/qat: not in 
enabled drivers build config 00:03:17.825 common/sfc_efx: not in enabled drivers build config 00:03:17.825 mempool/bucket: not in enabled drivers build config 00:03:17.825 mempool/cnxk: not in enabled drivers build config 00:03:17.825 mempool/dpaa: not in enabled drivers build config 00:03:17.825 mempool/dpaa2: not in enabled drivers build config 00:03:17.825 mempool/octeontx: not in enabled drivers build config 00:03:17.825 mempool/stack: not in enabled drivers build config 00:03:17.825 dma/cnxk: not in enabled drivers build config 00:03:17.825 dma/dpaa: not in enabled drivers build config 00:03:17.825 dma/dpaa2: not in enabled drivers build config 00:03:17.825 dma/hisilicon: not in enabled drivers build config 00:03:17.825 dma/idxd: not in enabled drivers build config 00:03:17.825 dma/ioat: not in enabled drivers build config 00:03:17.825 dma/skeleton: not in enabled drivers build config 00:03:17.825 net/af_packet: not in enabled drivers build config 00:03:17.825 net/af_xdp: not in enabled drivers build config 00:03:17.825 net/ark: not in enabled drivers build config 00:03:17.825 net/atlantic: not in enabled drivers build config 00:03:17.825 net/avp: not in enabled drivers build config 00:03:17.825 net/axgbe: not in enabled drivers build config 00:03:17.825 net/bnx2x: not in enabled drivers build config 00:03:17.825 net/bnxt: not in enabled drivers build config 00:03:17.825 net/bonding: not in enabled drivers build config 00:03:17.825 net/cnxk: not in enabled drivers build config 00:03:17.825 net/cpfl: not in enabled drivers build config 00:03:17.825 net/cxgbe: not in enabled drivers build config 00:03:17.825 net/dpaa: not in enabled drivers build config 00:03:17.825 net/dpaa2: not in enabled drivers build config 00:03:17.825 net/e1000: not in enabled drivers build config 00:03:17.825 net/ena: not in enabled drivers build config 00:03:17.825 net/enetc: not in enabled drivers build config 00:03:17.825 net/enetfec: not in enabled drivers build config 00:03:17.825 net/enic: not in enabled drivers build config 00:03:17.825 net/failsafe: not in enabled drivers build config 00:03:17.825 net/fm10k: not in enabled drivers build config 00:03:17.825 net/gve: not in enabled drivers build config 00:03:17.825 net/hinic: not in enabled drivers build config 00:03:17.825 net/hns3: not in enabled drivers build config 00:03:17.825 net/i40e: not in enabled drivers build config 00:03:17.825 net/iavf: not in enabled drivers build config 00:03:17.825 net/ice: not in enabled drivers build config 00:03:17.825 net/idpf: not in enabled drivers build config 00:03:17.825 net/igc: not in enabled drivers build config 00:03:17.825 net/ionic: not in enabled drivers build config 00:03:17.825 net/ipn3ke: not in enabled drivers build config 00:03:17.825 net/ixgbe: not in enabled drivers build config 00:03:17.825 net/mana: not in enabled drivers build config 00:03:17.825 net/memif: not in enabled drivers build config 00:03:17.825 net/mlx4: not in enabled drivers build config 00:03:17.825 net/mlx5: not in enabled drivers build config 00:03:17.825 net/mvneta: not in enabled drivers build config 00:03:17.825 net/mvpp2: not in enabled drivers build config 00:03:17.825 net/netvsc: not in enabled drivers build config 00:03:17.825 net/nfb: not in enabled drivers build config 00:03:17.825 net/nfp: not in enabled drivers build config 00:03:17.825 net/ngbe: not in enabled drivers build config 00:03:17.825 net/null: not in enabled drivers build config 00:03:17.825 net/octeontx: not in enabled drivers build config 00:03:17.825 
net/octeon_ep: not in enabled drivers build config 00:03:17.825 net/pcap: not in enabled drivers build config 00:03:17.825 net/pfe: not in enabled drivers build config 00:03:17.825 net/qede: not in enabled drivers build config 00:03:17.825 net/ring: not in enabled drivers build config 00:03:17.825 net/sfc: not in enabled drivers build config 00:03:17.825 net/softnic: not in enabled drivers build config 00:03:17.825 net/tap: not in enabled drivers build config 00:03:17.825 net/thunderx: not in enabled drivers build config 00:03:17.825 net/txgbe: not in enabled drivers build config 00:03:17.825 net/vdev_netvsc: not in enabled drivers build config 00:03:17.825 net/vhost: not in enabled drivers build config 00:03:17.825 net/virtio: not in enabled drivers build config 00:03:17.825 net/vmxnet3: not in enabled drivers build config 00:03:17.825 raw/*: missing internal dependency, "rawdev" 00:03:17.825 crypto/armv8: not in enabled drivers build config 00:03:17.825 crypto/bcmfs: not in enabled drivers build config 00:03:17.825 crypto/caam_jr: not in enabled drivers build config 00:03:17.825 crypto/ccp: not in enabled drivers build config 00:03:17.825 crypto/cnxk: not in enabled drivers build config 00:03:17.825 crypto/dpaa_sec: not in enabled drivers build config 00:03:17.825 crypto/dpaa2_sec: not in enabled drivers build config 00:03:17.825 crypto/ipsec_mb: not in enabled drivers build config 00:03:17.825 crypto/mlx5: not in enabled drivers build config 00:03:17.825 crypto/mvsam: not in enabled drivers build config 00:03:17.825 crypto/nitrox: not in enabled drivers build config 00:03:17.825 crypto/null: not in enabled drivers build config 00:03:17.825 crypto/octeontx: not in enabled drivers build config 00:03:17.825 crypto/openssl: not in enabled drivers build config 00:03:17.825 crypto/scheduler: not in enabled drivers build config 00:03:17.825 crypto/uadk: not in enabled drivers build config 00:03:17.825 crypto/virtio: not in enabled drivers build config 00:03:17.825 compress/isal: not in enabled drivers build config 00:03:17.825 compress/mlx5: not in enabled drivers build config 00:03:17.825 compress/nitrox: not in enabled drivers build config 00:03:17.825 compress/octeontx: not in enabled drivers build config 00:03:17.825 compress/zlib: not in enabled drivers build config 00:03:17.825 regex/*: missing internal dependency, "regexdev" 00:03:17.825 ml/*: missing internal dependency, "mldev" 00:03:17.825 vdpa/ifc: not in enabled drivers build config 00:03:17.825 vdpa/mlx5: not in enabled drivers build config 00:03:17.825 vdpa/nfp: not in enabled drivers build config 00:03:17.825 vdpa/sfc: not in enabled drivers build config 00:03:17.825 event/*: missing internal dependency, "eventdev" 00:03:17.825 baseband/*: missing internal dependency, "bbdev" 00:03:17.825 gpu/*: missing internal dependency, "gpudev" 00:03:17.825 00:03:17.825 00:03:17.825 Build targets in project: 85 00:03:17.825 00:03:17.825 DPDK 24.03.0 00:03:17.825 00:03:17.825 User defined options 00:03:17.825 buildtype : debug 00:03:17.825 default_library : shared 00:03:17.825 libdir : lib 00:03:17.825 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:17.825 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:17.825 c_link_args : 00:03:17.825 cpu_instruction_set: native 00:03:17.825 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:17.825 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:17.825 enable_docs : false 00:03:17.825 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:17.825 enable_kmods : false 00:03:17.825 max_lcores : 128 00:03:17.825 tests : false 00:03:17.825 00:03:17.825 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:17.826 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:17.826 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:17.826 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:17.826 [3/268] Linking static target lib/librte_kvargs.a 00:03:17.826 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:17.826 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:17.826 [6/268] Linking static target lib/librte_log.a 00:03:18.084 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.084 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:18.342 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:18.342 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:18.342 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:18.342 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:18.342 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:18.342 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:18.601 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:18.601 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:18.601 [17/268] Linking static target lib/librte_telemetry.a 00:03:18.601 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.601 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:18.601 [20/268] Linking target lib/librte_log.so.24.1 00:03:19.167 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:19.167 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:19.167 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:19.167 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:19.167 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:19.167 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:19.426 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:19.426 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:19.426 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:19.426 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:19.426 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.426 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:19.426 [33/268] Linking target lib/librte_telemetry.so.24.1 00:03:19.684 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:19.684 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:19.684 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:19.684 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:19.943 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:20.201 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:20.201 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:20.201 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:20.201 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:20.201 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:20.470 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:20.470 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:20.470 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:20.470 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:20.730 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:20.730 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:20.730 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:20.730 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:20.988 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:21.246 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:21.246 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:21.246 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:21.246 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:21.504 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:21.504 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:21.762 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:21.762 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:21.762 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:21.762 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:21.762 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:22.021 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:22.279 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:22.279 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:22.279 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:22.537 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:22.537 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:22.796 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:22.796 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:22.796 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:22.796 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:22.796 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:22.796 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:22.796 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:23.054 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:23.313 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:23.313 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:23.313 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:23.571 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:23.571 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:23.571 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:23.830 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:23.830 [85/268] Linking static target lib/librte_ring.a 00:03:23.830 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:23.830 [87/268] Linking static target lib/librte_eal.a 00:03:23.830 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:24.088 [89/268] Linking static target lib/librte_rcu.a 00:03:24.088 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:24.088 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:24.088 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:24.088 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:24.347 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:24.347 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:24.347 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.347 [97/268] Linking static target lib/librte_mempool.a 00:03:24.347 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:24.347 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.606 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:24.606 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:24.606 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:24.606 [103/268] Linking static target lib/librte_mbuf.a 00:03:24.867 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:24.867 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:25.126 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:25.126 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:25.126 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:25.126 [109/268] Linking static target lib/librte_net.a 00:03:25.126 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:25.126 [111/268] Linking static target lib/librte_meter.a 00:03:25.384 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:25.384 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:25.384 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.642 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.642 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.642 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:25.642 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:25.918 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.200 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:26.458 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:26.458 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:26.458 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:26.458 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:26.458 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:26.716 [126/268] Linking static target lib/librte_pci.a 00:03:26.716 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:26.716 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:26.974 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:26.974 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:26.974 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.974 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:27.233 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:27.233 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:27.233 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:27.233 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:27.233 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:27.233 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:27.233 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:27.233 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:27.233 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:27.233 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:27.233 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:27.491 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:27.491 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:27.491 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:27.491 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:27.491 [148/268] Linking static target lib/librte_ethdev.a 00:03:28.058 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:28.058 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:28.058 [151/268] Linking static target 
lib/librte_cmdline.a 00:03:28.058 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:28.058 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:28.058 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:28.058 [155/268] Linking static target lib/librte_timer.a 00:03:28.316 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:28.316 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:28.316 [158/268] Linking static target lib/librte_hash.a 00:03:28.316 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:28.574 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:28.574 [161/268] Linking static target lib/librte_compressdev.a 00:03:28.833 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:28.833 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.833 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:28.833 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:29.091 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:29.349 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:29.349 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:29.349 [169/268] Linking static target lib/librte_dmadev.a 00:03:29.349 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:29.349 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:29.608 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:29.608 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:29.608 [174/268] Linking static target lib/librte_cryptodev.a 00:03:29.608 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.608 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.866 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.866 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:30.124 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:30.124 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:30.124 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.384 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:30.384 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:30.384 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:30.384 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:30.384 [186/268] Linking static target lib/librte_power.a 00:03:30.645 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:30.645 [188/268] Linking static target lib/librte_reorder.a 00:03:30.903 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:30.903 [190/268] Linking static target lib/librte_security.a 00:03:31.161 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:31.161 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:31.161 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:31.161 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:31.419 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.678 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.678 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.935 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:31.935 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:31.935 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:32.193 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:32.193 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.452 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:32.452 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.710 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:32.710 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:32.710 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:32.710 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:32.710 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:32.969 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:32.969 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:32.969 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:32.969 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:33.228 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.228 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.228 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:33.228 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:33.228 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.228 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:33.228 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.228 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:33.228 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:33.486 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.486 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:33.486 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.486 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.486 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:33.745 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:34.679 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:34.679 [230/268] Linking static target lib/librte_vhost.a 00:03:35.245 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.245 [232/268] Linking target lib/librte_eal.so.24.1 00:03:35.504 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:35.504 [234/268] Linking target lib/librte_meter.so.24.1 00:03:35.504 [235/268] Linking target lib/librte_timer.so.24.1 00:03:35.504 [236/268] Linking target lib/librte_ring.so.24.1 00:03:35.504 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:35.504 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:35.504 [239/268] Linking target lib/librte_pci.so.24.1 00:03:35.504 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:35.504 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:35.504 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:35.504 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:35.504 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:35.504 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:35.504 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:35.504 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:35.763 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.763 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:35.763 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:35.763 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:35.763 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:36.023 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:36.023 [254/268] Linking target lib/librte_net.so.24.1 00:03:36.023 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:36.023 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:36.023 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:36.023 [258/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.023 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:36.023 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:36.282 [261/268] Linking target lib/librte_security.so.24.1 00:03:36.282 [262/268] Linking target lib/librte_hash.so.24.1 00:03:36.282 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:36.282 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:36.282 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:36.282 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:36.282 [267/268] Linking target lib/librte_power.so.24.1 00:03:36.541 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:36.541 INFO: autodetecting backend as ninja 00:03:36.541 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:03.097 CC lib/ut/ut.o 00:04:03.097 CC lib/ut_mock/mock.o 00:04:03.097 CC lib/log/log.o 00:04:03.097 CC 
lib/log/log_deprecated.o 00:04:03.097 CC lib/log/log_flags.o 00:04:03.097 LIB libspdk_ut.a 00:04:03.097 LIB libspdk_log.a 00:04:03.097 LIB libspdk_ut_mock.a 00:04:03.097 SO libspdk_ut.so.2.0 00:04:03.097 SO libspdk_ut_mock.so.6.0 00:04:03.097 SO libspdk_log.so.7.1 00:04:03.097 SYMLINK libspdk_ut.so 00:04:03.097 SYMLINK libspdk_ut_mock.so 00:04:03.097 SYMLINK libspdk_log.so 00:04:03.097 CC lib/util/base64.o 00:04:03.097 CC lib/ioat/ioat.o 00:04:03.097 CC lib/util/bit_array.o 00:04:03.097 CC lib/util/cpuset.o 00:04:03.097 CC lib/util/crc32.o 00:04:03.097 CC lib/dma/dma.o 00:04:03.097 CC lib/util/crc32c.o 00:04:03.097 CC lib/util/crc16.o 00:04:03.097 CXX lib/trace_parser/trace.o 00:04:03.097 CC lib/vfio_user/host/vfio_user_pci.o 00:04:03.097 CC lib/util/crc32_ieee.o 00:04:03.097 CC lib/vfio_user/host/vfio_user.o 00:04:03.097 CC lib/util/crc64.o 00:04:03.097 CC lib/util/dif.o 00:04:03.097 CC lib/util/fd.o 00:04:03.097 LIB libspdk_dma.a 00:04:03.097 CC lib/util/fd_group.o 00:04:03.097 SO libspdk_dma.so.5.0 00:04:03.097 SYMLINK libspdk_dma.so 00:04:03.097 CC lib/util/file.o 00:04:03.097 CC lib/util/hexlify.o 00:04:03.097 CC lib/util/iov.o 00:04:03.097 LIB libspdk_ioat.a 00:04:03.097 CC lib/util/math.o 00:04:03.097 SO libspdk_ioat.so.7.0 00:04:03.097 CC lib/util/net.o 00:04:03.097 LIB libspdk_vfio_user.a 00:04:03.097 SYMLINK libspdk_ioat.so 00:04:03.097 CC lib/util/pipe.o 00:04:03.097 SO libspdk_vfio_user.so.5.0 00:04:03.097 CC lib/util/strerror_tls.o 00:04:03.097 CC lib/util/string.o 00:04:03.097 SYMLINK libspdk_vfio_user.so 00:04:03.097 CC lib/util/uuid.o 00:04:03.097 CC lib/util/xor.o 00:04:03.097 CC lib/util/zipf.o 00:04:03.097 CC lib/util/md5.o 00:04:03.097 LIB libspdk_util.a 00:04:03.097 SO libspdk_util.so.10.1 00:04:03.097 LIB libspdk_trace_parser.a 00:04:03.097 SYMLINK libspdk_util.so 00:04:03.097 SO libspdk_trace_parser.so.6.0 00:04:03.097 SYMLINK libspdk_trace_parser.so 00:04:03.097 CC lib/conf/conf.o 00:04:03.097 CC lib/env_dpdk/env.o 00:04:03.097 CC lib/env_dpdk/memory.o 00:04:03.097 CC lib/env_dpdk/pci.o 00:04:03.097 CC lib/env_dpdk/init.o 00:04:03.097 CC lib/rdma_utils/rdma_utils.o 00:04:03.097 CC lib/json/json_parse.o 00:04:03.097 CC lib/json/json_util.o 00:04:03.097 CC lib/idxd/idxd.o 00:04:03.097 CC lib/vmd/vmd.o 00:04:03.097 LIB libspdk_conf.a 00:04:03.097 CC lib/json/json_write.o 00:04:03.097 CC lib/env_dpdk/threads.o 00:04:03.097 SO libspdk_conf.so.6.0 00:04:03.097 LIB libspdk_rdma_utils.a 00:04:03.097 SO libspdk_rdma_utils.so.1.0 00:04:03.097 CC lib/env_dpdk/pci_ioat.o 00:04:03.097 SYMLINK libspdk_conf.so 00:04:03.097 CC lib/idxd/idxd_user.o 00:04:03.097 CC lib/env_dpdk/pci_virtio.o 00:04:03.097 SYMLINK libspdk_rdma_utils.so 00:04:03.097 CC lib/vmd/led.o 00:04:03.097 CC lib/env_dpdk/pci_vmd.o 00:04:03.097 CC lib/idxd/idxd_kernel.o 00:04:03.097 CC lib/env_dpdk/pci_idxd.o 00:04:03.097 LIB libspdk_json.a 00:04:03.097 CC lib/env_dpdk/pci_event.o 00:04:03.097 CC lib/env_dpdk/sigbus_handler.o 00:04:03.097 SO libspdk_json.so.6.0 00:04:03.097 CC lib/env_dpdk/pci_dpdk.o 00:04:03.097 LIB libspdk_vmd.a 00:04:03.097 SYMLINK libspdk_json.so 00:04:03.097 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:03.097 SO libspdk_vmd.so.6.0 00:04:03.097 LIB libspdk_idxd.a 00:04:03.097 SO libspdk_idxd.so.12.1 00:04:03.097 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.097 SYMLINK libspdk_vmd.so 00:04:03.097 CC lib/rdma_provider/common.o 00:04:03.097 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:03.097 SYMLINK libspdk_idxd.so 00:04:03.097 CC lib/jsonrpc/jsonrpc_server.o 00:04:03.097 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:04:03.097 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:03.097 CC lib/jsonrpc/jsonrpc_client.o 00:04:03.097 LIB libspdk_rdma_provider.a 00:04:03.097 SO libspdk_rdma_provider.so.7.0 00:04:03.097 SYMLINK libspdk_rdma_provider.so 00:04:03.097 LIB libspdk_jsonrpc.a 00:04:03.097 SO libspdk_jsonrpc.so.6.0 00:04:03.097 SYMLINK libspdk_jsonrpc.so 00:04:03.097 LIB libspdk_env_dpdk.a 00:04:03.097 SO libspdk_env_dpdk.so.15.1 00:04:03.097 CC lib/rpc/rpc.o 00:04:03.097 SYMLINK libspdk_env_dpdk.so 00:04:03.097 LIB libspdk_rpc.a 00:04:03.097 SO libspdk_rpc.so.6.0 00:04:03.097 SYMLINK libspdk_rpc.so 00:04:03.097 CC lib/keyring/keyring.o 00:04:03.097 CC lib/keyring/keyring_rpc.o 00:04:03.097 CC lib/trace/trace_flags.o 00:04:03.097 CC lib/trace/trace.o 00:04:03.097 CC lib/trace/trace_rpc.o 00:04:03.097 CC lib/notify/notify_rpc.o 00:04:03.097 CC lib/notify/notify.o 00:04:03.098 LIB libspdk_notify.a 00:04:03.357 SO libspdk_notify.so.6.0 00:04:03.357 LIB libspdk_trace.a 00:04:03.357 LIB libspdk_keyring.a 00:04:03.357 SO libspdk_trace.so.11.0 00:04:03.357 SYMLINK libspdk_notify.so 00:04:03.357 SO libspdk_keyring.so.2.0 00:04:03.357 SYMLINK libspdk_keyring.so 00:04:03.357 SYMLINK libspdk_trace.so 00:04:03.615 CC lib/sock/sock_rpc.o 00:04:03.615 CC lib/sock/sock.o 00:04:03.615 CC lib/thread/thread.o 00:04:03.615 CC lib/thread/iobuf.o 00:04:04.182 LIB libspdk_sock.a 00:04:04.182 SO libspdk_sock.so.10.0 00:04:04.182 SYMLINK libspdk_sock.so 00:04:04.441 CC lib/nvme/nvme_ctrlr.o 00:04:04.441 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:04.441 CC lib/nvme/nvme_fabric.o 00:04:04.441 CC lib/nvme/nvme_ns.o 00:04:04.441 CC lib/nvme/nvme_ns_cmd.o 00:04:04.441 CC lib/nvme/nvme_pcie.o 00:04:04.441 CC lib/nvme/nvme_pcie_common.o 00:04:04.441 CC lib/nvme/nvme_qpair.o 00:04:04.441 CC lib/nvme/nvme.o 00:04:05.376 CC lib/nvme/nvme_quirks.o 00:04:05.376 CC lib/nvme/nvme_transport.o 00:04:05.376 CC lib/nvme/nvme_discovery.o 00:04:05.376 LIB libspdk_thread.a 00:04:05.376 SO libspdk_thread.so.11.0 00:04:05.377 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:05.377 SYMLINK libspdk_thread.so 00:04:05.377 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:05.377 CC lib/nvme/nvme_tcp.o 00:04:05.377 CC lib/nvme/nvme_opal.o 00:04:05.635 CC lib/accel/accel.o 00:04:05.635 CC lib/nvme/nvme_io_msg.o 00:04:05.894 CC lib/nvme/nvme_poll_group.o 00:04:05.894 CC lib/nvme/nvme_zns.o 00:04:05.894 CC lib/nvme/nvme_stubs.o 00:04:06.153 CC lib/nvme/nvme_auth.o 00:04:06.153 CC lib/nvme/nvme_cuse.o 00:04:06.153 CC lib/nvme/nvme_rdma.o 00:04:06.411 CC lib/blob/blobstore.o 00:04:06.670 CC lib/accel/accel_rpc.o 00:04:06.670 CC lib/accel/accel_sw.o 00:04:06.670 CC lib/blob/request.o 00:04:06.670 CC lib/init/json_config.o 00:04:06.670 CC lib/blob/zeroes.o 00:04:06.670 CC lib/init/subsystem.o 00:04:06.928 LIB libspdk_accel.a 00:04:06.928 SO libspdk_accel.so.16.0 00:04:06.928 CC lib/init/subsystem_rpc.o 00:04:06.928 CC lib/init/rpc.o 00:04:06.928 CC lib/blob/blob_bs_dev.o 00:04:06.928 SYMLINK libspdk_accel.so 00:04:07.187 CC lib/virtio/virtio.o 00:04:07.187 CC lib/virtio/virtio_vhost_user.o 00:04:07.187 CC lib/virtio/virtio_vfio_user.o 00:04:07.187 CC lib/virtio/virtio_pci.o 00:04:07.188 LIB libspdk_init.a 00:04:07.188 CC lib/fsdev/fsdev.o 00:04:07.188 SO libspdk_init.so.6.0 00:04:07.188 CC lib/bdev/bdev.o 00:04:07.188 SYMLINK libspdk_init.so 00:04:07.188 CC lib/fsdev/fsdev_io.o 00:04:07.188 CC lib/bdev/bdev_rpc.o 00:04:07.446 CC lib/fsdev/fsdev_rpc.o 00:04:07.446 CC lib/bdev/bdev_zone.o 00:04:07.446 CC lib/bdev/part.o 00:04:07.446 LIB libspdk_virtio.a 
00:04:07.446 SO libspdk_virtio.so.7.0 00:04:07.446 CC lib/bdev/scsi_nvme.o 00:04:07.446 SYMLINK libspdk_virtio.so 00:04:07.705 LIB libspdk_nvme.a 00:04:07.705 SO libspdk_nvme.so.15.0 00:04:07.705 CC lib/event/app.o 00:04:07.705 CC lib/event/reactor.o 00:04:07.705 CC lib/event/log_rpc.o 00:04:07.705 CC lib/event/app_rpc.o 00:04:07.705 CC lib/event/scheduler_static.o 00:04:07.705 LIB libspdk_fsdev.a 00:04:07.963 SO libspdk_fsdev.so.2.0 00:04:07.963 SYMLINK libspdk_fsdev.so 00:04:07.963 SYMLINK libspdk_nvme.so 00:04:08.221 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:08.221 LIB libspdk_event.a 00:04:08.221 SO libspdk_event.so.14.0 00:04:08.480 SYMLINK libspdk_event.so 00:04:08.738 LIB libspdk_fuse_dispatcher.a 00:04:08.738 SO libspdk_fuse_dispatcher.so.1.0 00:04:08.738 SYMLINK libspdk_fuse_dispatcher.so 00:04:09.305 LIB libspdk_blob.a 00:04:09.564 SO libspdk_blob.so.12.0 00:04:09.564 SYMLINK libspdk_blob.so 00:04:09.822 LIB libspdk_bdev.a 00:04:09.822 CC lib/lvol/lvol.o 00:04:09.822 CC lib/blobfs/blobfs.o 00:04:09.822 CC lib/blobfs/tree.o 00:04:09.822 SO libspdk_bdev.so.17.0 00:04:10.081 SYMLINK libspdk_bdev.so 00:04:10.340 CC lib/nvmf/ctrlr_discovery.o 00:04:10.340 CC lib/nvmf/ctrlr_bdev.o 00:04:10.340 CC lib/nvmf/ctrlr.o 00:04:10.340 CC lib/scsi/dev.o 00:04:10.340 CC lib/nvmf/subsystem.o 00:04:10.340 CC lib/ftl/ftl_core.o 00:04:10.340 CC lib/ublk/ublk.o 00:04:10.340 CC lib/nbd/nbd.o 00:04:10.597 CC lib/scsi/lun.o 00:04:10.597 CC lib/ftl/ftl_init.o 00:04:10.597 LIB libspdk_blobfs.a 00:04:10.597 SO libspdk_blobfs.so.11.0 00:04:10.597 CC lib/ftl/ftl_layout.o 00:04:10.597 CC lib/nbd/nbd_rpc.o 00:04:10.855 SYMLINK libspdk_blobfs.so 00:04:10.855 CC lib/ftl/ftl_debug.o 00:04:10.855 LIB libspdk_lvol.a 00:04:10.855 CC lib/scsi/port.o 00:04:10.855 CC lib/ublk/ublk_rpc.o 00:04:10.855 SO libspdk_lvol.so.11.0 00:04:10.855 LIB libspdk_nbd.a 00:04:10.855 SYMLINK libspdk_lvol.so 00:04:10.855 SO libspdk_nbd.so.7.0 00:04:10.855 CC lib/ftl/ftl_io.o 00:04:10.855 CC lib/ftl/ftl_sb.o 00:04:10.855 CC lib/ftl/ftl_l2p.o 00:04:11.113 SYMLINK libspdk_nbd.so 00:04:11.113 CC lib/scsi/scsi.o 00:04:11.113 LIB libspdk_ublk.a 00:04:11.113 CC lib/ftl/ftl_l2p_flat.o 00:04:11.113 CC lib/ftl/ftl_nv_cache.o 00:04:11.113 CC lib/ftl/ftl_band.o 00:04:11.113 SO libspdk_ublk.so.3.0 00:04:11.113 SYMLINK libspdk_ublk.so 00:04:11.113 CC lib/nvmf/nvmf.o 00:04:11.113 CC lib/nvmf/nvmf_rpc.o 00:04:11.113 CC lib/scsi/scsi_bdev.o 00:04:11.113 CC lib/ftl/ftl_band_ops.o 00:04:11.113 CC lib/nvmf/transport.o 00:04:11.372 CC lib/nvmf/tcp.o 00:04:11.372 CC lib/nvmf/stubs.o 00:04:11.630 CC lib/nvmf/mdns_server.o 00:04:11.630 CC lib/scsi/scsi_pr.o 00:04:11.630 CC lib/scsi/scsi_rpc.o 00:04:11.889 CC lib/ftl/ftl_writer.o 00:04:11.889 CC lib/nvmf/rdma.o 00:04:11.889 CC lib/nvmf/auth.o 00:04:11.889 CC lib/scsi/task.o 00:04:11.889 CC lib/ftl/ftl_rq.o 00:04:11.889 CC lib/ftl/ftl_reloc.o 00:04:11.889 CC lib/ftl/ftl_l2p_cache.o 00:04:12.148 CC lib/ftl/ftl_p2l.o 00:04:12.148 CC lib/ftl/ftl_p2l_log.o 00:04:12.148 CC lib/ftl/mngt/ftl_mngt.o 00:04:12.148 LIB libspdk_scsi.a 00:04:12.148 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:12.148 SO libspdk_scsi.so.9.0 00:04:12.406 SYMLINK libspdk_scsi.so 00:04:12.406 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:12.406 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:12.406 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:12.406 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:12.406 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:12.680 CC lib/iscsi/conn.o 00:04:12.680 CC lib/iscsi/init_grp.o 00:04:12.680 CC lib/vhost/vhost.o 00:04:12.680 CC lib/vhost/vhost_rpc.o 
00:04:12.680 CC lib/vhost/vhost_scsi.o 00:04:12.680 CC lib/vhost/vhost_blk.o 00:04:12.680 CC lib/vhost/rte_vhost_user.o 00:04:12.680 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:12.951 CC lib/iscsi/iscsi.o 00:04:12.951 CC lib/iscsi/param.o 00:04:12.951 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:13.210 CC lib/iscsi/portal_grp.o 00:04:13.210 CC lib/iscsi/tgt_node.o 00:04:13.210 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:13.469 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:13.469 CC lib/iscsi/iscsi_subsystem.o 00:04:13.469 CC lib/iscsi/iscsi_rpc.o 00:04:13.469 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:13.469 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:13.727 CC lib/ftl/utils/ftl_conf.o 00:04:13.727 CC lib/iscsi/task.o 00:04:13.727 CC lib/ftl/utils/ftl_md.o 00:04:13.727 CC lib/ftl/utils/ftl_mempool.o 00:04:13.727 CC lib/ftl/utils/ftl_bitmap.o 00:04:13.986 LIB libspdk_vhost.a 00:04:13.986 LIB libspdk_nvmf.a 00:04:13.986 CC lib/ftl/utils/ftl_property.o 00:04:13.986 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:13.986 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:13.986 SO libspdk_vhost.so.8.0 00:04:13.986 SO libspdk_nvmf.so.20.0 00:04:13.986 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:13.986 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:13.986 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:13.986 SYMLINK libspdk_vhost.so 00:04:13.986 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:14.245 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:14.245 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:14.245 SYMLINK libspdk_nvmf.so 00:04:14.245 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:14.245 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:14.245 LIB libspdk_iscsi.a 00:04:14.245 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:14.245 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:14.245 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:14.245 CC lib/ftl/base/ftl_base_dev.o 00:04:14.245 CC lib/ftl/base/ftl_base_bdev.o 00:04:14.245 SO libspdk_iscsi.so.8.0 00:04:14.503 CC lib/ftl/ftl_trace.o 00:04:14.503 SYMLINK libspdk_iscsi.so 00:04:14.503 LIB libspdk_ftl.a 00:04:14.762 SO libspdk_ftl.so.9.0 00:04:15.021 SYMLINK libspdk_ftl.so 00:04:15.589 CC module/env_dpdk/env_dpdk_rpc.o 00:04:15.589 CC module/blob/bdev/blob_bdev.o 00:04:15.589 CC module/sock/uring/uring.o 00:04:15.589 CC module/keyring/file/keyring.o 00:04:15.589 CC module/sock/posix/posix.o 00:04:15.589 CC module/accel/error/accel_error.o 00:04:15.589 CC module/accel/dsa/accel_dsa.o 00:04:15.589 CC module/accel/ioat/accel_ioat.o 00:04:15.589 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:15.589 CC module/fsdev/aio/fsdev_aio.o 00:04:15.589 LIB libspdk_env_dpdk_rpc.a 00:04:15.589 SO libspdk_env_dpdk_rpc.so.6.0 00:04:15.589 SYMLINK libspdk_env_dpdk_rpc.so 00:04:15.589 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:15.589 CC module/keyring/file/keyring_rpc.o 00:04:15.848 CC module/accel/ioat/accel_ioat_rpc.o 00:04:15.848 CC module/accel/error/accel_error_rpc.o 00:04:15.848 LIB libspdk_scheduler_dynamic.a 00:04:15.848 SO libspdk_scheduler_dynamic.so.4.0 00:04:15.848 LIB libspdk_blob_bdev.a 00:04:15.848 LIB libspdk_keyring_file.a 00:04:15.848 CC module/accel/dsa/accel_dsa_rpc.o 00:04:15.848 SO libspdk_blob_bdev.so.12.0 00:04:15.848 SYMLINK libspdk_scheduler_dynamic.so 00:04:15.848 SO libspdk_keyring_file.so.2.0 00:04:15.848 LIB libspdk_accel_ioat.a 00:04:15.848 SYMLINK libspdk_blob_bdev.so 00:04:15.848 SO libspdk_accel_ioat.so.6.0 00:04:15.848 LIB libspdk_accel_error.a 00:04:15.848 SYMLINK libspdk_keyring_file.so 00:04:15.848 SO libspdk_accel_error.so.2.0 00:04:15.848 SYMLINK libspdk_accel_ioat.so 00:04:16.106 LIB libspdk_accel_dsa.a 
00:04:16.106 SYMLINK libspdk_accel_error.so 00:04:16.106 CC module/accel/iaa/accel_iaa.o 00:04:16.106 SO libspdk_accel_dsa.so.5.0 00:04:16.106 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:16.106 SYMLINK libspdk_accel_dsa.so 00:04:16.106 CC module/fsdev/aio/linux_aio_mgr.o 00:04:16.106 CC module/scheduler/gscheduler/gscheduler.o 00:04:16.106 CC module/keyring/linux/keyring.o 00:04:16.106 LIB libspdk_scheduler_dpdk_governor.a 00:04:16.365 CC module/accel/iaa/accel_iaa_rpc.o 00:04:16.365 LIB libspdk_sock_uring.a 00:04:16.365 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:16.365 CC module/keyring/linux/keyring_rpc.o 00:04:16.365 SO libspdk_sock_uring.so.5.0 00:04:16.365 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:16.365 LIB libspdk_sock_posix.a 00:04:16.365 LIB libspdk_scheduler_gscheduler.a 00:04:16.365 LIB libspdk_fsdev_aio.a 00:04:16.365 SYMLINK libspdk_sock_uring.so 00:04:16.365 SO libspdk_scheduler_gscheduler.so.4.0 00:04:16.365 CC module/blobfs/bdev/blobfs_bdev.o 00:04:16.365 SO libspdk_sock_posix.so.6.0 00:04:16.365 CC module/bdev/delay/vbdev_delay.o 00:04:16.365 SO libspdk_fsdev_aio.so.1.0 00:04:16.365 LIB libspdk_accel_iaa.a 00:04:16.365 LIB libspdk_keyring_linux.a 00:04:16.365 SYMLINK libspdk_scheduler_gscheduler.so 00:04:16.365 SO libspdk_accel_iaa.so.3.0 00:04:16.365 SO libspdk_keyring_linux.so.1.0 00:04:16.365 SYMLINK libspdk_fsdev_aio.so 00:04:16.365 SYMLINK libspdk_sock_posix.so 00:04:16.365 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:16.623 SYMLINK libspdk_keyring_linux.so 00:04:16.623 SYMLINK libspdk_accel_iaa.so 00:04:16.623 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:16.623 CC module/bdev/gpt/gpt.o 00:04:16.623 CC module/bdev/error/vbdev_error.o 00:04:16.623 CC module/bdev/lvol/vbdev_lvol.o 00:04:16.623 CC module/bdev/null/bdev_null.o 00:04:16.623 CC module/bdev/malloc/bdev_malloc.o 00:04:16.623 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:16.623 CC module/bdev/nvme/bdev_nvme.o 00:04:16.623 LIB libspdk_blobfs_bdev.a 00:04:16.623 CC module/bdev/gpt/vbdev_gpt.o 00:04:16.623 CC module/bdev/passthru/vbdev_passthru.o 00:04:16.623 SO libspdk_blobfs_bdev.so.6.0 00:04:16.623 LIB libspdk_bdev_delay.a 00:04:16.882 SO libspdk_bdev_delay.so.6.0 00:04:16.882 CC module/bdev/error/vbdev_error_rpc.o 00:04:16.882 SYMLINK libspdk_blobfs_bdev.so 00:04:16.882 CC module/bdev/null/bdev_null_rpc.o 00:04:16.882 SYMLINK libspdk_bdev_delay.so 00:04:16.882 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:16.882 LIB libspdk_bdev_error.a 00:04:16.882 CC module/bdev/raid/bdev_raid.o 00:04:16.882 LIB libspdk_bdev_null.a 00:04:16.882 LIB libspdk_bdev_gpt.a 00:04:16.882 LIB libspdk_bdev_malloc.a 00:04:16.882 SO libspdk_bdev_error.so.6.0 00:04:17.141 CC module/bdev/split/vbdev_split.o 00:04:17.141 SO libspdk_bdev_gpt.so.6.0 00:04:17.141 SO libspdk_bdev_null.so.6.0 00:04:17.141 CC module/bdev/split/vbdev_split_rpc.o 00:04:17.141 SO libspdk_bdev_malloc.so.6.0 00:04:17.141 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:17.141 LIB libspdk_bdev_passthru.a 00:04:17.141 SYMLINK libspdk_bdev_error.so 00:04:17.141 SYMLINK libspdk_bdev_gpt.so 00:04:17.141 SYMLINK libspdk_bdev_null.so 00:04:17.141 SYMLINK libspdk_bdev_malloc.so 00:04:17.141 CC module/bdev/raid/bdev_raid_rpc.o 00:04:17.141 SO libspdk_bdev_passthru.so.6.0 00:04:17.141 SYMLINK libspdk_bdev_passthru.so 00:04:17.141 CC module/bdev/raid/bdev_raid_sb.o 00:04:17.400 LIB libspdk_bdev_split.a 00:04:17.400 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:17.400 CC module/bdev/aio/bdev_aio.o 00:04:17.400 SO libspdk_bdev_split.so.6.0 
00:04:17.400 CC module/bdev/uring/bdev_uring.o 00:04:17.400 CC module/bdev/aio/bdev_aio_rpc.o 00:04:17.400 SYMLINK libspdk_bdev_split.so 00:04:17.400 CC module/bdev/uring/bdev_uring_rpc.o 00:04:17.400 CC module/bdev/ftl/bdev_ftl.o 00:04:17.400 LIB libspdk_bdev_lvol.a 00:04:17.400 SO libspdk_bdev_lvol.so.6.0 00:04:17.659 SYMLINK libspdk_bdev_lvol.so 00:04:17.659 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:17.659 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:17.659 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:17.659 LIB libspdk_bdev_aio.a 00:04:17.659 CC module/bdev/nvme/nvme_rpc.o 00:04:17.659 LIB libspdk_bdev_uring.a 00:04:17.659 SO libspdk_bdev_aio.so.6.0 00:04:17.659 CC module/bdev/iscsi/bdev_iscsi.o 00:04:17.659 SO libspdk_bdev_uring.so.6.0 00:04:17.659 LIB libspdk_bdev_zone_block.a 00:04:17.659 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:17.659 SO libspdk_bdev_zone_block.so.6.0 00:04:17.659 SYMLINK libspdk_bdev_aio.so 00:04:17.659 SYMLINK libspdk_bdev_uring.so 00:04:17.659 CC module/bdev/nvme/bdev_mdns_client.o 00:04:17.659 CC module/bdev/nvme/vbdev_opal.o 00:04:17.918 SYMLINK libspdk_bdev_zone_block.so 00:04:17.918 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:17.918 LIB libspdk_bdev_ftl.a 00:04:17.918 SO libspdk_bdev_ftl.so.6.0 00:04:17.918 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:17.918 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:17.918 SYMLINK libspdk_bdev_ftl.so 00:04:17.918 CC module/bdev/raid/raid0.o 00:04:17.918 CC module/bdev/raid/raid1.o 00:04:18.177 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:18.177 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:18.177 CC module/bdev/raid/concat.o 00:04:18.177 LIB libspdk_bdev_iscsi.a 00:04:18.177 SO libspdk_bdev_iscsi.so.6.0 00:04:18.177 SYMLINK libspdk_bdev_iscsi.so 00:04:18.436 LIB libspdk_bdev_virtio.a 00:04:18.436 LIB libspdk_bdev_raid.a 00:04:18.436 SO libspdk_bdev_virtio.so.6.0 00:04:18.436 SO libspdk_bdev_raid.so.6.0 00:04:18.436 SYMLINK libspdk_bdev_virtio.so 00:04:18.436 SYMLINK libspdk_bdev_raid.so 00:04:19.373 LIB libspdk_bdev_nvme.a 00:04:19.373 SO libspdk_bdev_nvme.so.7.1 00:04:19.373 SYMLINK libspdk_bdev_nvme.so 00:04:19.941 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:19.941 CC module/event/subsystems/vmd/vmd.o 00:04:19.941 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:19.941 CC module/event/subsystems/sock/sock.o 00:04:19.941 CC module/event/subsystems/fsdev/fsdev.o 00:04:19.941 CC module/event/subsystems/keyring/keyring.o 00:04:19.941 CC module/event/subsystems/scheduler/scheduler.o 00:04:19.941 CC module/event/subsystems/iobuf/iobuf.o 00:04:19.941 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:19.941 LIB libspdk_event_sock.a 00:04:19.941 LIB libspdk_event_vhost_blk.a 00:04:19.941 LIB libspdk_event_keyring.a 00:04:19.941 LIB libspdk_event_fsdev.a 00:04:19.941 LIB libspdk_event_vmd.a 00:04:20.200 LIB libspdk_event_scheduler.a 00:04:20.200 SO libspdk_event_sock.so.5.0 00:04:20.200 SO libspdk_event_keyring.so.1.0 00:04:20.200 SO libspdk_event_vhost_blk.so.3.0 00:04:20.200 LIB libspdk_event_iobuf.a 00:04:20.200 SO libspdk_event_fsdev.so.1.0 00:04:20.200 SO libspdk_event_vmd.so.6.0 00:04:20.200 SO libspdk_event_scheduler.so.4.0 00:04:20.200 SO libspdk_event_iobuf.so.3.0 00:04:20.200 SYMLINK libspdk_event_sock.so 00:04:20.200 SYMLINK libspdk_event_vhost_blk.so 00:04:20.200 SYMLINK libspdk_event_keyring.so 00:04:20.200 SYMLINK libspdk_event_vmd.so 00:04:20.200 SYMLINK libspdk_event_fsdev.so 00:04:20.200 SYMLINK libspdk_event_scheduler.so 00:04:20.200 SYMLINK libspdk_event_iobuf.so 00:04:20.458 CC 
module/event/subsystems/accel/accel.o 00:04:20.716 LIB libspdk_event_accel.a 00:04:20.716 SO libspdk_event_accel.so.6.0 00:04:20.716 SYMLINK libspdk_event_accel.so 00:04:20.975 CC module/event/subsystems/bdev/bdev.o 00:04:21.233 LIB libspdk_event_bdev.a 00:04:21.233 SO libspdk_event_bdev.so.6.0 00:04:21.233 SYMLINK libspdk_event_bdev.so 00:04:21.492 CC module/event/subsystems/nbd/nbd.o 00:04:21.492 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:21.492 CC module/event/subsystems/scsi/scsi.o 00:04:21.492 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:21.492 CC module/event/subsystems/ublk/ublk.o 00:04:21.750 LIB libspdk_event_nbd.a 00:04:21.750 SO libspdk_event_nbd.so.6.0 00:04:21.750 LIB libspdk_event_ublk.a 00:04:21.750 LIB libspdk_event_scsi.a 00:04:21.750 SO libspdk_event_ublk.so.3.0 00:04:21.750 SYMLINK libspdk_event_nbd.so 00:04:21.750 SO libspdk_event_scsi.so.6.0 00:04:21.750 SYMLINK libspdk_event_ublk.so 00:04:21.750 SYMLINK libspdk_event_scsi.so 00:04:21.750 LIB libspdk_event_nvmf.a 00:04:21.750 SO libspdk_event_nvmf.so.6.0 00:04:22.013 SYMLINK libspdk_event_nvmf.so 00:04:22.013 CC module/event/subsystems/iscsi/iscsi.o 00:04:22.013 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:22.273 LIB libspdk_event_vhost_scsi.a 00:04:22.273 SO libspdk_event_vhost_scsi.so.3.0 00:04:22.273 LIB libspdk_event_iscsi.a 00:04:22.273 SO libspdk_event_iscsi.so.6.0 00:04:22.273 SYMLINK libspdk_event_vhost_scsi.so 00:04:22.273 SYMLINK libspdk_event_iscsi.so 00:04:22.532 SO libspdk.so.6.0 00:04:22.532 SYMLINK libspdk.so 00:04:22.790 CC app/spdk_lspci/spdk_lspci.o 00:04:22.790 CC app/trace_record/trace_record.o 00:04:22.790 CXX app/trace/trace.o 00:04:22.790 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:22.790 CC app/nvmf_tgt/nvmf_main.o 00:04:22.790 CC app/iscsi_tgt/iscsi_tgt.o 00:04:22.790 CC app/spdk_tgt/spdk_tgt.o 00:04:22.790 CC examples/util/zipf/zipf.o 00:04:22.790 CC test/thread/poller_perf/poller_perf.o 00:04:22.790 CC examples/ioat/perf/perf.o 00:04:23.049 LINK spdk_lspci 00:04:23.049 LINK interrupt_tgt 00:04:23.049 LINK zipf 00:04:23.049 LINK nvmf_tgt 00:04:23.049 LINK poller_perf 00:04:23.049 LINK iscsi_tgt 00:04:23.049 LINK spdk_trace_record 00:04:23.049 LINK spdk_tgt 00:04:23.307 LINK ioat_perf 00:04:23.307 CC app/spdk_nvme_perf/perf.o 00:04:23.307 LINK spdk_trace 00:04:23.307 CC app/spdk_nvme_identify/identify.o 00:04:23.307 CC app/spdk_nvme_discover/discovery_aer.o 00:04:23.307 CC examples/ioat/verify/verify.o 00:04:23.566 TEST_HEADER include/spdk/accel.h 00:04:23.566 TEST_HEADER include/spdk/accel_module.h 00:04:23.566 TEST_HEADER include/spdk/assert.h 00:04:23.566 TEST_HEADER include/spdk/barrier.h 00:04:23.566 TEST_HEADER include/spdk/base64.h 00:04:23.566 TEST_HEADER include/spdk/bdev.h 00:04:23.566 TEST_HEADER include/spdk/bdev_module.h 00:04:23.566 TEST_HEADER include/spdk/bdev_zone.h 00:04:23.566 TEST_HEADER include/spdk/bit_array.h 00:04:23.566 TEST_HEADER include/spdk/bit_pool.h 00:04:23.566 TEST_HEADER include/spdk/blob_bdev.h 00:04:23.566 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:23.566 TEST_HEADER include/spdk/blobfs.h 00:04:23.566 TEST_HEADER include/spdk/blob.h 00:04:23.566 TEST_HEADER include/spdk/conf.h 00:04:23.566 TEST_HEADER include/spdk/config.h 00:04:23.566 TEST_HEADER include/spdk/cpuset.h 00:04:23.566 TEST_HEADER include/spdk/crc16.h 00:04:23.566 TEST_HEADER include/spdk/crc32.h 00:04:23.566 TEST_HEADER include/spdk/crc64.h 00:04:23.566 TEST_HEADER include/spdk/dif.h 00:04:23.566 CC app/spdk_top/spdk_top.o 00:04:23.566 TEST_HEADER include/spdk/dma.h 
00:04:23.566 TEST_HEADER include/spdk/endian.h 00:04:23.566 TEST_HEADER include/spdk/env_dpdk.h 00:04:23.566 TEST_HEADER include/spdk/env.h 00:04:23.566 TEST_HEADER include/spdk/event.h 00:04:23.566 TEST_HEADER include/spdk/fd_group.h 00:04:23.566 TEST_HEADER include/spdk/fd.h 00:04:23.566 TEST_HEADER include/spdk/file.h 00:04:23.566 TEST_HEADER include/spdk/fsdev.h 00:04:23.566 TEST_HEADER include/spdk/fsdev_module.h 00:04:23.566 TEST_HEADER include/spdk/ftl.h 00:04:23.566 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:23.566 CC test/dma/test_dma/test_dma.o 00:04:23.566 TEST_HEADER include/spdk/gpt_spec.h 00:04:23.566 TEST_HEADER include/spdk/hexlify.h 00:04:23.566 CC examples/thread/thread/thread_ex.o 00:04:23.566 TEST_HEADER include/spdk/histogram_data.h 00:04:23.566 TEST_HEADER include/spdk/idxd.h 00:04:23.566 CC test/app/bdev_svc/bdev_svc.o 00:04:23.566 TEST_HEADER include/spdk/idxd_spec.h 00:04:23.566 TEST_HEADER include/spdk/init.h 00:04:23.566 TEST_HEADER include/spdk/ioat.h 00:04:23.566 TEST_HEADER include/spdk/ioat_spec.h 00:04:23.566 TEST_HEADER include/spdk/iscsi_spec.h 00:04:23.566 TEST_HEADER include/spdk/json.h 00:04:23.566 TEST_HEADER include/spdk/jsonrpc.h 00:04:23.566 TEST_HEADER include/spdk/keyring.h 00:04:23.566 TEST_HEADER include/spdk/keyring_module.h 00:04:23.566 TEST_HEADER include/spdk/likely.h 00:04:23.566 TEST_HEADER include/spdk/log.h 00:04:23.566 TEST_HEADER include/spdk/lvol.h 00:04:23.566 TEST_HEADER include/spdk/md5.h 00:04:23.566 TEST_HEADER include/spdk/memory.h 00:04:23.566 TEST_HEADER include/spdk/mmio.h 00:04:23.566 TEST_HEADER include/spdk/nbd.h 00:04:23.566 TEST_HEADER include/spdk/net.h 00:04:23.566 TEST_HEADER include/spdk/notify.h 00:04:23.566 TEST_HEADER include/spdk/nvme.h 00:04:23.566 TEST_HEADER include/spdk/nvme_intel.h 00:04:23.566 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:23.566 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:23.566 TEST_HEADER include/spdk/nvme_spec.h 00:04:23.566 TEST_HEADER include/spdk/nvme_zns.h 00:04:23.566 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:23.566 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:23.566 TEST_HEADER include/spdk/nvmf.h 00:04:23.566 TEST_HEADER include/spdk/nvmf_spec.h 00:04:23.566 TEST_HEADER include/spdk/nvmf_transport.h 00:04:23.566 TEST_HEADER include/spdk/opal.h 00:04:23.566 TEST_HEADER include/spdk/opal_spec.h 00:04:23.566 TEST_HEADER include/spdk/pci_ids.h 00:04:23.566 TEST_HEADER include/spdk/pipe.h 00:04:23.566 TEST_HEADER include/spdk/queue.h 00:04:23.566 TEST_HEADER include/spdk/reduce.h 00:04:23.566 TEST_HEADER include/spdk/rpc.h 00:04:23.566 TEST_HEADER include/spdk/scheduler.h 00:04:23.566 TEST_HEADER include/spdk/scsi.h 00:04:23.566 TEST_HEADER include/spdk/scsi_spec.h 00:04:23.566 TEST_HEADER include/spdk/sock.h 00:04:23.566 TEST_HEADER include/spdk/stdinc.h 00:04:23.566 TEST_HEADER include/spdk/string.h 00:04:23.566 TEST_HEADER include/spdk/thread.h 00:04:23.566 TEST_HEADER include/spdk/trace.h 00:04:23.566 TEST_HEADER include/spdk/trace_parser.h 00:04:23.566 TEST_HEADER include/spdk/tree.h 00:04:23.566 TEST_HEADER include/spdk/ublk.h 00:04:23.566 TEST_HEADER include/spdk/util.h 00:04:23.566 TEST_HEADER include/spdk/uuid.h 00:04:23.566 TEST_HEADER include/spdk/version.h 00:04:23.566 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:23.566 LINK spdk_nvme_discover 00:04:23.566 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:23.566 TEST_HEADER include/spdk/vhost.h 00:04:23.566 TEST_HEADER include/spdk/vmd.h 00:04:23.566 TEST_HEADER include/spdk/xor.h 00:04:23.566 TEST_HEADER 
include/spdk/zipf.h 00:04:23.566 CXX test/cpp_headers/accel.o 00:04:23.566 LINK verify 00:04:23.825 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:23.825 LINK bdev_svc 00:04:23.825 CXX test/cpp_headers/accel_module.o 00:04:23.825 LINK thread 00:04:23.825 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:24.084 CXX test/cpp_headers/assert.o 00:04:24.084 CC app/spdk_dd/spdk_dd.o 00:04:24.084 LINK test_dma 00:04:24.084 LINK spdk_nvme_perf 00:04:24.084 LINK nvme_fuzz 00:04:24.084 CC app/fio/nvme/fio_plugin.o 00:04:24.084 CXX test/cpp_headers/barrier.o 00:04:24.342 LINK spdk_nvme_identify 00:04:24.342 CC examples/sock/hello_world/hello_sock.o 00:04:24.342 CXX test/cpp_headers/base64.o 00:04:24.342 LINK spdk_top 00:04:24.342 CC examples/vmd/lsvmd/lsvmd.o 00:04:24.600 LINK spdk_dd 00:04:24.600 CC examples/idxd/perf/perf.o 00:04:24.600 LINK hello_sock 00:04:24.600 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:24.600 CXX test/cpp_headers/bdev.o 00:04:24.600 CC examples/accel/perf/accel_perf.o 00:04:24.600 LINK lsvmd 00:04:24.600 LINK spdk_nvme 00:04:24.859 CXX test/cpp_headers/bdev_module.o 00:04:24.859 CC examples/vmd/led/led.o 00:04:24.859 CC examples/blob/cli/blobcli.o 00:04:24.859 CC examples/blob/hello_world/hello_blob.o 00:04:24.859 LINK hello_fsdev 00:04:24.859 LINK idxd_perf 00:04:24.859 LINK led 00:04:24.859 CC app/fio/bdev/fio_plugin.o 00:04:24.859 CXX test/cpp_headers/bdev_zone.o 00:04:25.117 CC test/env/mem_callbacks/mem_callbacks.o 00:04:25.117 LINK hello_blob 00:04:25.117 CC test/env/vtophys/vtophys.o 00:04:25.117 LINK accel_perf 00:04:25.117 CXX test/cpp_headers/bit_array.o 00:04:25.117 CC app/vhost/vhost.o 00:04:25.117 CC examples/nvme/hello_world/hello_world.o 00:04:25.375 LINK vtophys 00:04:25.375 LINK blobcli 00:04:25.375 CXX test/cpp_headers/bit_pool.o 00:04:25.375 CC examples/nvme/reconnect/reconnect.o 00:04:25.375 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:25.375 LINK vhost 00:04:25.375 CC examples/nvme/arbitration/arbitration.o 00:04:25.375 LINK hello_world 00:04:25.633 LINK spdk_bdev 00:04:25.633 CXX test/cpp_headers/blob_bdev.o 00:04:25.633 LINK iscsi_fuzz 00:04:25.633 CC examples/nvme/hotplug/hotplug.o 00:04:25.633 LINK mem_callbacks 00:04:25.633 LINK reconnect 00:04:25.633 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:25.633 CC examples/nvme/abort/abort.o 00:04:25.633 CXX test/cpp_headers/blobfs_bdev.o 00:04:25.891 LINK arbitration 00:04:25.891 CC examples/bdev/hello_world/hello_bdev.o 00:04:25.891 LINK nvme_manage 00:04:25.891 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.891 LINK hotplug 00:04:25.891 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:25.891 LINK cmb_copy 00:04:25.891 CXX test/cpp_headers/blobfs.o 00:04:25.891 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:25.891 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:26.149 LINK env_dpdk_post_init 00:04:26.149 LINK hello_bdev 00:04:26.149 CXX test/cpp_headers/blob.o 00:04:26.149 CC test/env/memory/memory_ut.o 00:04:26.149 LINK abort 00:04:26.149 LINK pmr_persistence 00:04:26.149 CC test/rpc_client/rpc_client_test.o 00:04:26.149 CC test/event/event_perf/event_perf.o 00:04:26.149 CC test/nvme/aer/aer.o 00:04:26.408 CXX test/cpp_headers/conf.o 00:04:26.408 CC test/nvme/reset/reset.o 00:04:26.408 LINK event_perf 00:04:26.408 LINK rpc_client_test 00:04:26.408 CC test/nvme/sgl/sgl.o 00:04:26.408 LINK vhost_fuzz 00:04:26.408 CC test/env/pci/pci_ut.o 00:04:26.408 CC examples/bdev/bdevperf/bdevperf.o 00:04:26.408 CXX test/cpp_headers/config.o 00:04:26.408 CXX test/cpp_headers/cpuset.o 
00:04:26.666 LINK aer 00:04:26.666 CC test/event/reactor/reactor.o 00:04:26.666 CC test/event/reactor_perf/reactor_perf.o 00:04:26.666 LINK reset 00:04:26.666 CC test/app/histogram_perf/histogram_perf.o 00:04:26.666 CXX test/cpp_headers/crc16.o 00:04:26.666 LINK sgl 00:04:26.666 CXX test/cpp_headers/crc32.o 00:04:26.666 LINK reactor 00:04:26.666 LINK reactor_perf 00:04:26.666 CXX test/cpp_headers/crc64.o 00:04:26.924 LINK histogram_perf 00:04:26.924 LINK pci_ut 00:04:26.924 CC test/nvme/e2edp/nvme_dp.o 00:04:26.924 CXX test/cpp_headers/dif.o 00:04:26.924 CC test/event/app_repeat/app_repeat.o 00:04:26.924 CC test/event/scheduler/scheduler.o 00:04:27.182 CC test/app/jsoncat/jsoncat.o 00:04:27.182 CXX test/cpp_headers/dma.o 00:04:27.182 CC test/accel/dif/dif.o 00:04:27.182 CC test/blobfs/mkfs/mkfs.o 00:04:27.182 LINK app_repeat 00:04:27.182 CC test/app/stub/stub.o 00:04:27.182 LINK nvme_dp 00:04:27.182 LINK jsoncat 00:04:27.182 LINK bdevperf 00:04:27.182 CXX test/cpp_headers/endian.o 00:04:27.441 LINK scheduler 00:04:27.441 CXX test/cpp_headers/env_dpdk.o 00:04:27.441 LINK memory_ut 00:04:27.441 LINK mkfs 00:04:27.441 CXX test/cpp_headers/env.o 00:04:27.441 LINK stub 00:04:27.441 CXX test/cpp_headers/event.o 00:04:27.441 CC test/nvme/overhead/overhead.o 00:04:27.699 CC test/nvme/err_injection/err_injection.o 00:04:27.699 CC test/nvme/startup/startup.o 00:04:27.699 CC test/nvme/simple_copy/simple_copy.o 00:04:27.699 CC test/nvme/connect_stress/connect_stress.o 00:04:27.699 CXX test/cpp_headers/fd_group.o 00:04:27.699 CC test/nvme/reserve/reserve.o 00:04:27.699 CC examples/nvmf/nvmf/nvmf.o 00:04:27.957 CC test/lvol/esnap/esnap.o 00:04:27.957 LINK overhead 00:04:27.957 CXX test/cpp_headers/fd.o 00:04:27.957 LINK dif 00:04:27.957 LINK startup 00:04:27.957 LINK err_injection 00:04:27.957 LINK connect_stress 00:04:27.957 LINK reserve 00:04:27.957 LINK simple_copy 00:04:27.957 CXX test/cpp_headers/file.o 00:04:27.957 LINK nvmf 00:04:28.215 CC test/nvme/boot_partition/boot_partition.o 00:04:28.215 CC test/nvme/compliance/nvme_compliance.o 00:04:28.215 CC test/nvme/fused_ordering/fused_ordering.o 00:04:28.215 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:28.215 CC test/nvme/fdp/fdp.o 00:04:28.215 CC test/nvme/cuse/cuse.o 00:04:28.215 CXX test/cpp_headers/fsdev.o 00:04:28.215 CXX test/cpp_headers/fsdev_module.o 00:04:28.215 LINK boot_partition 00:04:28.215 CC test/bdev/bdevio/bdevio.o 00:04:28.473 LINK fused_ordering 00:04:28.473 LINK doorbell_aers 00:04:28.473 CXX test/cpp_headers/ftl.o 00:04:28.473 CXX test/cpp_headers/fuse_dispatcher.o 00:04:28.473 CXX test/cpp_headers/gpt_spec.o 00:04:28.473 LINK nvme_compliance 00:04:28.473 LINK fdp 00:04:28.473 CXX test/cpp_headers/hexlify.o 00:04:28.473 CXX test/cpp_headers/histogram_data.o 00:04:28.473 CXX test/cpp_headers/idxd.o 00:04:28.731 CXX test/cpp_headers/idxd_spec.o 00:04:28.731 CXX test/cpp_headers/init.o 00:04:28.731 CXX test/cpp_headers/ioat.o 00:04:28.731 CXX test/cpp_headers/ioat_spec.o 00:04:28.731 CXX test/cpp_headers/iscsi_spec.o 00:04:28.731 CXX test/cpp_headers/json.o 00:04:28.731 LINK bdevio 00:04:28.731 CXX test/cpp_headers/jsonrpc.o 00:04:28.731 CXX test/cpp_headers/keyring.o 00:04:28.731 CXX test/cpp_headers/keyring_module.o 00:04:28.731 CXX test/cpp_headers/likely.o 00:04:28.731 CXX test/cpp_headers/log.o 00:04:28.731 CXX test/cpp_headers/lvol.o 00:04:28.989 CXX test/cpp_headers/md5.o 00:04:28.989 CXX test/cpp_headers/memory.o 00:04:28.989 CXX test/cpp_headers/mmio.o 00:04:28.989 CXX test/cpp_headers/nbd.o 00:04:28.989 CXX 
test/cpp_headers/net.o 00:04:28.989 CXX test/cpp_headers/notify.o 00:04:28.989 CXX test/cpp_headers/nvme.o 00:04:28.989 CXX test/cpp_headers/nvme_intel.o 00:04:28.989 CXX test/cpp_headers/nvme_ocssd.o 00:04:28.989 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:28.989 CXX test/cpp_headers/nvme_spec.o 00:04:29.247 CXX test/cpp_headers/nvme_zns.o 00:04:29.247 CXX test/cpp_headers/nvmf_cmd.o 00:04:29.247 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:29.247 CXX test/cpp_headers/nvmf.o 00:04:29.247 CXX test/cpp_headers/nvmf_spec.o 00:04:29.247 CXX test/cpp_headers/nvmf_transport.o 00:04:29.247 CXX test/cpp_headers/opal.o 00:04:29.247 CXX test/cpp_headers/opal_spec.o 00:04:29.247 CXX test/cpp_headers/pci_ids.o 00:04:29.247 CXX test/cpp_headers/pipe.o 00:04:29.247 CXX test/cpp_headers/queue.o 00:04:29.247 CXX test/cpp_headers/reduce.o 00:04:29.247 CXX test/cpp_headers/rpc.o 00:04:29.504 CXX test/cpp_headers/scheduler.o 00:04:29.504 CXX test/cpp_headers/scsi.o 00:04:29.505 CXX test/cpp_headers/scsi_spec.o 00:04:29.505 CXX test/cpp_headers/sock.o 00:04:29.505 CXX test/cpp_headers/stdinc.o 00:04:29.505 CXX test/cpp_headers/string.o 00:04:29.505 LINK cuse 00:04:29.505 CXX test/cpp_headers/thread.o 00:04:29.505 CXX test/cpp_headers/trace.o 00:04:29.505 CXX test/cpp_headers/trace_parser.o 00:04:29.505 CXX test/cpp_headers/tree.o 00:04:29.763 CXX test/cpp_headers/ublk.o 00:04:29.763 CXX test/cpp_headers/util.o 00:04:29.763 CXX test/cpp_headers/uuid.o 00:04:29.763 CXX test/cpp_headers/version.o 00:04:29.763 CXX test/cpp_headers/vfio_user_pci.o 00:04:29.763 CXX test/cpp_headers/vfio_user_spec.o 00:04:29.763 CXX test/cpp_headers/vhost.o 00:04:29.763 CXX test/cpp_headers/vmd.o 00:04:29.763 CXX test/cpp_headers/xor.o 00:04:29.763 CXX test/cpp_headers/zipf.o 00:04:33.047 LINK esnap 00:04:33.047 00:04:33.047 real 1m28.797s 00:04:33.047 user 8m19.007s 00:04:33.047 sys 1m33.904s 00:04:33.047 14:09:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:33.047 14:09:57 make -- common/autotest_common.sh@10 -- $ set +x 00:04:33.047 ************************************ 00:04:33.047 END TEST make 00:04:33.047 ************************************ 00:04:33.047 14:09:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:33.047 14:09:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:33.047 14:09:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:33.047 14:09:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.047 14:09:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:33.047 14:09:57 -- pm/common@44 -- $ pid=5304 00:04:33.047 14:09:57 -- pm/common@50 -- $ kill -TERM 5304 00:04:33.047 14:09:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.047 14:09:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:33.047 14:09:57 -- pm/common@44 -- $ pid=5305 00:04:33.048 14:09:57 -- pm/common@50 -- $ kill -TERM 5305 00:04:33.048 14:09:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:33.048 14:09:57 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:33.306 14:09:57 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.306 14:09:57 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.306 14:09:57 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.306 14:09:57 -- common/autotest_common.sh@1711 -- # lt 
1.15 2 00:04:33.306 14:09:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.306 14:09:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.306 14:09:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.306 14:09:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.306 14:09:57 -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.306 14:09:57 -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.306 14:09:57 -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.306 14:09:57 -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.306 14:09:57 -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.306 14:09:57 -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.306 14:09:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.306 14:09:57 -- scripts/common.sh@344 -- # case "$op" in 00:04:33.306 14:09:57 -- scripts/common.sh@345 -- # : 1 00:04:33.306 14:09:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.306 14:09:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.306 14:09:57 -- scripts/common.sh@365 -- # decimal 1 00:04:33.306 14:09:57 -- scripts/common.sh@353 -- # local d=1 00:04:33.306 14:09:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.306 14:09:57 -- scripts/common.sh@355 -- # echo 1 00:04:33.306 14:09:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.306 14:09:57 -- scripts/common.sh@366 -- # decimal 2 00:04:33.306 14:09:57 -- scripts/common.sh@353 -- # local d=2 00:04:33.306 14:09:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.307 14:09:58 -- scripts/common.sh@355 -- # echo 2 00:04:33.307 14:09:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.307 14:09:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.307 14:09:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.307 14:09:58 -- scripts/common.sh@368 -- # return 0 00:04:33.307 14:09:58 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.307 14:09:58 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.307 --rc genhtml_branch_coverage=1 00:04:33.307 --rc genhtml_function_coverage=1 00:04:33.307 --rc genhtml_legend=1 00:04:33.307 --rc geninfo_all_blocks=1 00:04:33.307 --rc geninfo_unexecuted_blocks=1 00:04:33.307 00:04:33.307 ' 00:04:33.307 14:09:58 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.307 --rc genhtml_branch_coverage=1 00:04:33.307 --rc genhtml_function_coverage=1 00:04:33.307 --rc genhtml_legend=1 00:04:33.307 --rc geninfo_all_blocks=1 00:04:33.307 --rc geninfo_unexecuted_blocks=1 00:04:33.307 00:04:33.307 ' 00:04:33.307 14:09:58 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.307 --rc genhtml_branch_coverage=1 00:04:33.307 --rc genhtml_function_coverage=1 00:04:33.307 --rc genhtml_legend=1 00:04:33.307 --rc geninfo_all_blocks=1 00:04:33.307 --rc geninfo_unexecuted_blocks=1 00:04:33.307 00:04:33.307 ' 00:04:33.307 14:09:58 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.307 --rc genhtml_branch_coverage=1 00:04:33.307 --rc genhtml_function_coverage=1 00:04:33.307 --rc genhtml_legend=1 00:04:33.307 --rc geninfo_all_blocks=1 00:04:33.307 --rc geninfo_unexecuted_blocks=1 00:04:33.307 00:04:33.307 ' 00:04:33.307 
14:09:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.307 14:09:58 -- nvmf/common.sh@7 -- # uname -s 00:04:33.307 14:09:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.307 14:09:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.307 14:09:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.307 14:09:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.307 14:09:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.307 14:09:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.307 14:09:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.307 14:09:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.307 14:09:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.307 14:09:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.307 14:09:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:04:33.307 14:09:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:04:33.307 14:09:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.307 14:09:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.307 14:09:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:33.307 14:09:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.307 14:09:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.307 14:09:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:33.307 14:09:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.307 14:09:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.307 14:09:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.307 14:09:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.307 14:09:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.307 14:09:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.307 14:09:58 -- paths/export.sh@5 -- # export PATH 00:04:33.307 14:09:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.307 14:09:58 -- nvmf/common.sh@51 -- # : 0 00:04:33.307 14:09:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:33.307 14:09:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:33.307 14:09:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.307 14:09:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.307 14:09:58 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.307 14:09:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:33.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:33.307 14:09:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:33.307 14:09:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:33.307 14:09:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:33.307 14:09:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:33.307 14:09:58 -- spdk/autotest.sh@32 -- # uname -s 00:04:33.307 14:09:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:33.307 14:09:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:33.307 14:09:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.307 14:09:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:33.307 14:09:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.307 14:09:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:33.307 14:09:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:33.307 14:09:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:33.307 14:09:58 -- spdk/autotest.sh@48 -- # udevadm_pid=55590 00:04:33.307 14:09:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:33.307 14:09:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:33.307 14:09:58 -- pm/common@17 -- # local monitor 00:04:33.307 14:09:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.307 14:09:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.307 14:09:58 -- pm/common@25 -- # sleep 1 00:04:33.307 14:09:58 -- pm/common@21 -- # date +%s 00:04:33.307 14:09:58 -- pm/common@21 -- # date +%s 00:04:33.307 14:09:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733839798 00:04:33.307 14:09:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733839798 00:04:33.566 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733839798_collect-cpu-load.pm.log 00:04:33.566 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733839798_collect-vmstat.pm.log 00:04:34.537 14:09:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:34.537 14:09:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:34.537 14:09:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:34.537 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.537 14:09:59 -- spdk/autotest.sh@59 -- # create_test_list 00:04:34.537 14:09:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:34.537 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.537 14:09:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:34.537 14:09:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:34.537 14:09:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:34.537 14:09:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:34.537 14:09:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:34.537 14:09:59 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:04:34.537 14:09:59 -- common/autotest_common.sh@1457 -- # uname 00:04:34.537 14:09:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:34.537 14:09:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:34.537 14:09:59 -- common/autotest_common.sh@1477 -- # uname 00:04:34.537 14:09:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:34.537 14:09:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:34.537 14:09:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:34.537 lcov: LCOV version 1.15 00:04:34.537 14:09:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:49.417 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:49.417 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:04.294 14:10:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:04.294 14:10:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.294 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:05:04.294 14:10:28 -- spdk/autotest.sh@78 -- # rm -f 00:05:04.294 14:10:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.294 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:04.294 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.294 14:10:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:04.294 14:10:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:04.294 14:10:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:04.294 14:10:28 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:04.294 14:10:28 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:04.294 14:10:28 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:04.294 14:10:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:04.294 14:10:28 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:04.294 14:10:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.294 14:10:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:04.294 14:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:04.295 14:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:04.295 14:10:28 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:04.295 14:10:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.295 14:10:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:04.295 14:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:04.295 14:10:28 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.295 14:10:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:04.295 14:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:04.295 14:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:04.295 14:10:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:04.295 14:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:04.295 14:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:04.295 14:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:04.295 14:10:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:04.295 14:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.295 14:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.295 14:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:04.295 14:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:04.295 14:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.295 No valid GPT data, bailing 00:05:04.295 14:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.295 14:10:28 -- scripts/common.sh@394 -- # pt= 00:05:04.295 14:10:28 -- scripts/common.sh@395 -- # return 1 00:05:04.295 14:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.295 1+0 records in 00:05:04.295 1+0 records out 00:05:04.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00357833 s, 293 MB/s 00:05:04.295 14:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.295 14:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.295 14:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:04.295 14:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:04.295 14:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:04.295 No valid GPT data, bailing 00:05:04.295 14:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.295 14:10:28 -- scripts/common.sh@394 -- # pt= 00:05:04.295 14:10:28 -- scripts/common.sh@395 -- # return 1 00:05:04.295 14:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:04.295 1+0 records in 00:05:04.295 1+0 records out 00:05:04.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371815 s, 282 MB/s 00:05:04.295 14:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.295 14:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.295 14:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:04.295 14:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:04.295 14:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:04.295 No valid GPT data, bailing 00:05:04.295 14:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:04.295 14:10:28 -- scripts/common.sh@394 -- # pt= 00:05:04.295 14:10:28 -- scripts/common.sh@395 -- # return 1 00:05:04.295 14:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n2 bs=1M count=1 00:05:04.295 1+0 records in 00:05:04.295 1+0 records out 00:05:04.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398127 s, 263 MB/s 00:05:04.295 14:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.295 14:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.295 14:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:04.295 14:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:04.295 14:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:04.295 No valid GPT data, bailing 00:05:04.295 14:10:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:04.295 14:10:29 -- scripts/common.sh@394 -- # pt= 00:05:04.295 14:10:29 -- scripts/common.sh@395 -- # return 1 00:05:04.295 14:10:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:04.295 1+0 records in 00:05:04.295 1+0 records out 00:05:04.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043533 s, 241 MB/s 00:05:04.295 14:10:29 -- spdk/autotest.sh@105 -- # sync 00:05:04.554 14:10:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.554 14:10:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.554 14:10:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:06.457 14:10:31 -- spdk/autotest.sh@111 -- # uname -s 00:05:06.457 14:10:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:06.457 14:10:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:06.457 14:10:31 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:07.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.024 Hugepages 00:05:07.024 node hugesize free / total 00:05:07.024 node0 1048576kB 0 / 0 00:05:07.024 node0 2048kB 0 / 0 00:05:07.024 00:05:07.024 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.024 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:07.024 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:07.283 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:07.283 14:10:31 -- spdk/autotest.sh@117 -- # uname -s 00:05:07.283 14:10:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:07.283 14:10:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:07.283 14:10:31 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.850 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.109 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.109 14:10:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:09.044 14:10:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:09.044 14:10:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:09.044 14:10:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.044 14:10:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:09.044 14:10:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:09.044 14:10:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:09.044 14:10:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.044 14:10:33 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.044 14:10:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:09.044 14:10:33 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:09.044 14:10:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:09.044 14:10:33 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.611 Waiting for block devices as requested 00:05:09.611 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.611 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.611 14:10:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:09.611 14:10:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:09.611 14:10:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:09.611 14:10:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:09.611 14:10:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:09.611 14:10:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:09.611 14:10:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:09.611 14:10:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:09.611 14:10:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:09.611 14:10:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:09.611 14:10:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:09.611 14:10:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:09.611 14:10:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:09.611 14:10:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:09.611 14:10:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:09.611 14:10:34 -- common/autotest_common.sh@1543 -- # continue 00:05:09.611 14:10:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:09.611 14:10:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:09.611 14:10:34 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:09.611 14:10:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:09.870 14:10:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.870 14:10:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:09.870 14:10:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.870 14:10:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:09.870 14:10:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
00:05:09.870 14:10:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:09.870 14:10:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:09.870 14:10:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:09.870 14:10:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:09.870 14:10:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:09.870 14:10:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:09.870 14:10:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:09.870 14:10:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:09.870 14:10:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:09.870 14:10:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:09.870 14:10:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:09.870 14:10:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:09.870 14:10:34 -- common/autotest_common.sh@1543 -- # continue 00:05:09.870 14:10:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:09.870 14:10:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.870 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:09.870 14:10:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:09.870 14:10:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.870 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:09.870 14:10:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.443 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.704 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.704 14:10:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:10.704 14:10:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.704 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:10.704 14:10:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:10.704 14:10:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:10.704 14:10:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.704 14:10:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:10.704 14:10:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:10.704 14:10:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:10.704 14:10:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:10.704 14:10:35 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:10.704 14:10:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:10.704 14:10:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:10.704 14:10:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.704 14:10:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:10.704 14:10:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:10.704 14:10:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:10.704 14:10:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:10.704 14:10:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.704 14:10:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:10.704 14:10:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:10.704 14:10:35 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.704 14:10:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.704 14:10:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:10.704 14:10:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:10.704 14:10:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.704 14:10:35 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:10.704 14:10:35 -- common/autotest_common.sh@1572 -- # return 0 00:05:10.704 14:10:35 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:10.704 14:10:35 -- common/autotest_common.sh@1580 -- # return 0 00:05:10.704 14:10:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:10.704 14:10:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:10.704 14:10:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.704 14:10:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.704 14:10:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:10.704 14:10:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.704 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:10.704 14:10:35 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:10.704 14:10:35 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:10.704 14:10:35 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:10.704 14:10:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:10.704 14:10:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.704 14:10:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.704 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:10.704 ************************************ 00:05:10.704 START TEST env 00:05:10.704 ************************************ 00:05:10.704 14:10:35 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:10.962 * Looking for test storage... 00:05:10.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:10.962 14:10:35 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.962 14:10:35 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.962 14:10:35 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.962 14:10:35 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.962 14:10:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.962 14:10:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.962 14:10:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.962 14:10:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.962 14:10:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.962 14:10:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.962 14:10:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.962 14:10:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.962 14:10:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.962 14:10:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.962 14:10:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.962 14:10:35 env -- scripts/common.sh@344 -- # case "$op" in 00:05:10.962 14:10:35 env -- scripts/common.sh@345 -- # : 1 00:05:10.962 14:10:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.962 14:10:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.962 14:10:35 env -- scripts/common.sh@365 -- # decimal 1 00:05:10.962 14:10:35 env -- scripts/common.sh@353 -- # local d=1 00:05:10.962 14:10:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.962 14:10:35 env -- scripts/common.sh@355 -- # echo 1 00:05:10.963 14:10:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.963 14:10:35 env -- scripts/common.sh@366 -- # decimal 2 00:05:10.963 14:10:35 env -- scripts/common.sh@353 -- # local d=2 00:05:10.963 14:10:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.963 14:10:35 env -- scripts/common.sh@355 -- # echo 2 00:05:10.963 14:10:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.963 14:10:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.963 14:10:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.963 14:10:35 env -- scripts/common.sh@368 -- # return 0 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.963 --rc genhtml_branch_coverage=1 00:05:10.963 --rc genhtml_function_coverage=1 00:05:10.963 --rc genhtml_legend=1 00:05:10.963 --rc geninfo_all_blocks=1 00:05:10.963 --rc geninfo_unexecuted_blocks=1 00:05:10.963 00:05:10.963 ' 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.963 --rc genhtml_branch_coverage=1 00:05:10.963 --rc genhtml_function_coverage=1 00:05:10.963 --rc genhtml_legend=1 00:05:10.963 --rc geninfo_all_blocks=1 00:05:10.963 --rc geninfo_unexecuted_blocks=1 00:05:10.963 00:05:10.963 ' 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.963 --rc genhtml_branch_coverage=1 00:05:10.963 --rc genhtml_function_coverage=1 00:05:10.963 --rc genhtml_legend=1 00:05:10.963 --rc geninfo_all_blocks=1 00:05:10.963 --rc geninfo_unexecuted_blocks=1 00:05:10.963 00:05:10.963 ' 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.963 --rc genhtml_branch_coverage=1 00:05:10.963 --rc genhtml_function_coverage=1 00:05:10.963 --rc genhtml_legend=1 00:05:10.963 --rc geninfo_all_blocks=1 00:05:10.963 --rc geninfo_unexecuted_blocks=1 00:05:10.963 00:05:10.963 ' 00:05:10.963 14:10:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.963 14:10:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.963 14:10:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.963 ************************************ 00:05:10.963 START TEST env_memory 00:05:10.963 ************************************ 00:05:10.963 14:10:35 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:10.963 00:05:10.963 00:05:10.963 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.963 http://cunit.sourceforge.net/ 00:05:10.963 00:05:10.963 00:05:10.963 Suite: memory 00:05:10.963 Test: alloc and free memory map ...[2024-12-10 14:10:35.757854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.963 passed 00:05:10.963 Test: mem map translation ...[2024-12-10 14:10:35.788391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.963 [2024-12-10 14:10:35.788436] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.963 [2024-12-10 14:10:35.788497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.963 [2024-12-10 14:10:35.788508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:11.221 passed 00:05:11.221 Test: mem map registration ...[2024-12-10 14:10:35.852108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:11.221 [2024-12-10 14:10:35.852152] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:11.221 passed 00:05:11.221 Test: mem map adjacent registrations ...passed 00:05:11.221 00:05:11.221 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.221 suites 1 1 n/a 0 0 00:05:11.221 tests 4 4 4 0 0 00:05:11.221 asserts 152 152 152 0 n/a 00:05:11.221 00:05:11.221 Elapsed time = 0.212 seconds 00:05:11.221 00:05:11.221 real 0m0.228s 00:05:11.221 user 0m0.210s 00:05:11.221 sys 0m0.014s 00:05:11.221 14:10:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.221 14:10:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:11.221 ************************************ 00:05:11.221 END TEST env_memory 00:05:11.221 ************************************ 00:05:11.222 14:10:35 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.222 14:10:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.222 14:10:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.222 14:10:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.222 ************************************ 00:05:11.222 START TEST env_vtophys 00:05:11.222 ************************************ 00:05:11.222 14:10:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.222 EAL: lib.eal log level changed from notice to debug 00:05:11.222 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 1 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 2 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 3 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 4 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 5 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 6 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 7 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 8 as core 0 on socket 0 00:05:11.222 EAL: Detected lcore 9 as core 0 on socket 0 00:05:11.222 EAL: Maximum logical cores by configuration: 128 00:05:11.222 EAL: Detected CPU lcores: 10 00:05:11.222 EAL: Detected NUMA nodes: 1 00:05:11.222 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:11.222 EAL: Detected shared linkage of DPDK 00:05:11.222 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:11.222 EAL: Selected IOVA mode 'PA' 00:05:11.222 EAL: Probing VFIO support... 00:05:11.222 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.222 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:11.222 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.222 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.222 EAL: Setting up physically contiguous memory... 00:05:11.222 EAL: Setting maximum number of open files to 524288 00:05:11.222 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.222 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.222 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.222 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.222 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.222 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.222 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.222 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.222 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.222 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.222 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.222 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.222 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.222 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.222 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.222 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.222 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.222 EAL: Hugepages will be freed exactly as allocated. 00:05:11.222 EAL: No shared files mode enabled, IPC is disabled 00:05:11.222 EAL: No shared files mode enabled, IPC is disabled 00:05:11.480 EAL: TSC frequency is ~2200000 KHz 00:05:11.480 EAL: Main lcore 0 is ready (tid=7f61c420da00;cpuset=[0]) 00:05:11.480 EAL: Trying to obtain current memory policy. 00:05:11.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.480 EAL: Restoring previous memory policy: 0 00:05:11.480 EAL: request: mp_malloc_sync 00:05:11.480 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.481 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.481 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.481 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.481 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:11.481 00:05:11.481 00:05:11.481 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.481 http://cunit.sourceforge.net/ 00:05:11.481 00:05:11.481 00:05:11.481 Suite: components_suite 00:05:11.481 Test: vtophys_malloc_test ...passed 00:05:11.481 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.481 EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.481 EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.481 EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.481 EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.481 EAL: Trying to obtain current memory policy. 
00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.481 EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.481 EAL: Trying to obtain current memory policy. 00:05:11.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.481 EAL: Restoring previous memory policy: 4 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.481 EAL: request: mp_malloc_sync 00:05:11.481 EAL: No shared files mode enabled, IPC is disabled 00:05:11.481 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.481 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.740 EAL: request: mp_malloc_sync 00:05:11.740 EAL: No shared files mode enabled, IPC is disabled 00:05:11.740 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.740 EAL: Trying to obtain current memory policy. 00:05:11.740 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.740 EAL: Restoring previous memory policy: 4 00:05:11.740 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.740 EAL: request: mp_malloc_sync 00:05:11.740 EAL: No shared files mode enabled, IPC is disabled 00:05:11.740 EAL: Heap on socket 0 was expanded by 514MB 00:05:11.740 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.740 EAL: request: mp_malloc_sync 00:05:11.740 EAL: No shared files mode enabled, IPC is disabled 00:05:11.740 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.740 EAL: Trying to obtain current memory policy. 
00:05:11.740 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.999 EAL: Restoring previous memory policy: 4 00:05:11.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.999 EAL: request: mp_malloc_sync 00:05:11.999 EAL: No shared files mode enabled, IPC is disabled 00:05:11.999 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.258 passed 00:05:12.258 00:05:12.258 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.258 suites 1 1 n/a 0 0 00:05:12.258 tests 2 2 2 0 0 00:05:12.258 asserts 5400 5400 5400 0 n/a 00:05:12.258 00:05:12.258 Elapsed time = 0.677 seconds 00:05:12.258 EAL: request: mp_malloc_sync 00:05:12.258 EAL: No shared files mode enabled, IPC is disabled 00:05:12.258 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.258 EAL: request: mp_malloc_sync 00:05:12.258 EAL: No shared files mode enabled, IPC is disabled 00:05:12.258 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.258 EAL: No shared files mode enabled, IPC is disabled 00:05:12.258 EAL: No shared files mode enabled, IPC is disabled 00:05:12.258 EAL: No shared files mode enabled, IPC is disabled 00:05:12.258 00:05:12.258 real 0m0.879s 00:05:12.258 user 0m0.458s 00:05:12.258 sys 0m0.291s 00:05:12.258 14:10:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.258 14:10:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.258 ************************************ 00:05:12.258 END TEST env_vtophys 00:05:12.258 ************************************ 00:05:12.258 14:10:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.258 14:10:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.258 14:10:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.258 14:10:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.258 ************************************ 00:05:12.258 START TEST env_pci 00:05:12.258 ************************************ 00:05:12.258 14:10:36 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.258 00:05:12.258 00:05:12.258 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.258 http://cunit.sourceforge.net/ 00:05:12.258 00:05:12.258 00:05:12.258 Suite: pci 00:05:12.258 Test: pci_hook ...[2024-12-10 14:10:36.936914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57772 has claimed it 00:05:12.258 passed 00:05:12.258 00:05:12.258 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.258 suites 1 1 n/a 0 0 00:05:12.258 tests 1 1 1 0 0 00:05:12.258 asserts 25 25 25 0 n/a 00:05:12.258 00:05:12.258 Elapsed time = 0.002 seconds 00:05:12.258 EAL: Cannot find device (10000:00:01.0) 00:05:12.258 EAL: Failed to attach device on primary process 00:05:12.258 00:05:12.258 real 0m0.022s 00:05:12.258 user 0m0.015s 00:05:12.258 sys 0m0.007s 00:05:12.258 14:10:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.258 14:10:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.258 ************************************ 00:05:12.258 END TEST env_pci 00:05:12.258 ************************************ 00:05:12.258 14:10:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.258 14:10:36 env -- env/env.sh@15 -- # uname 00:05:12.258 14:10:36 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.258 14:10:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.258 14:10:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.258 14:10:36 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:12.258 14:10:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.258 14:10:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.258 ************************************ 00:05:12.258 START TEST env_dpdk_post_init 00:05:12.258 ************************************ 00:05:12.258 14:10:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.258 EAL: Detected CPU lcores: 10 00:05:12.258 EAL: Detected NUMA nodes: 1 00:05:12.258 EAL: Detected shared linkage of DPDK 00:05:12.258 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.258 EAL: Selected IOVA mode 'PA' 00:05:12.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.517 Starting DPDK initialization... 00:05:12.517 Starting SPDK post initialization... 00:05:12.517 SPDK NVMe probe 00:05:12.517 Attaching to 0000:00:10.0 00:05:12.517 Attaching to 0000:00:11.0 00:05:12.517 Attached to 0000:00:10.0 00:05:12.517 Attached to 0000:00:11.0 00:05:12.517 Cleaning up... 00:05:12.517 00:05:12.517 real 0m0.185s 00:05:12.517 user 0m0.054s 00:05:12.517 sys 0m0.031s 00:05:12.517 14:10:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.517 14:10:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.517 ************************************ 00:05:12.517 END TEST env_dpdk_post_init 00:05:12.517 ************************************ 00:05:12.517 14:10:37 env -- env/env.sh@26 -- # uname 00:05:12.517 14:10:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.517 14:10:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.517 14:10:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.517 14:10:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.517 14:10:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.517 ************************************ 00:05:12.517 START TEST env_mem_callbacks 00:05:12.517 ************************************ 00:05:12.517 14:10:37 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.517 EAL: Detected CPU lcores: 10 00:05:12.517 EAL: Detected NUMA nodes: 1 00:05:12.517 EAL: Detected shared linkage of DPDK 00:05:12.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.517 EAL: Selected IOVA mode 'PA' 00:05:12.822 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.822 00:05:12.822 00:05:12.822 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.822 http://cunit.sourceforge.net/ 00:05:12.822 00:05:12.822 00:05:12.822 Suite: memory 00:05:12.822 Test: test ... 
00:05:12.822 register 0x200000200000 2097152 00:05:12.822 malloc 3145728 00:05:12.822 register 0x200000400000 4194304 00:05:12.822 buf 0x200000500000 len 3145728 PASSED 00:05:12.822 malloc 64 00:05:12.822 buf 0x2000004fff40 len 64 PASSED 00:05:12.822 malloc 4194304 00:05:12.822 register 0x200000800000 6291456 00:05:12.822 buf 0x200000a00000 len 4194304 PASSED 00:05:12.822 free 0x200000500000 3145728 00:05:12.822 free 0x2000004fff40 64 00:05:12.822 unregister 0x200000400000 4194304 PASSED 00:05:12.822 free 0x200000a00000 4194304 00:05:12.822 unregister 0x200000800000 6291456 PASSED 00:05:12.822 malloc 8388608 00:05:12.822 register 0x200000400000 10485760 00:05:12.822 buf 0x200000600000 len 8388608 PASSED 00:05:12.822 free 0x200000600000 8388608 00:05:12.822 unregister 0x200000400000 10485760 PASSED 00:05:12.822 passed 00:05:12.822 00:05:12.822 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.822 suites 1 1 n/a 0 0 00:05:12.822 tests 1 1 1 0 0 00:05:12.822 asserts 15 15 15 0 n/a 00:05:12.822 00:05:12.822 Elapsed time = 0.008 seconds 00:05:12.822 00:05:12.822 real 0m0.140s 00:05:12.822 user 0m0.017s 00:05:12.822 sys 0m0.023s 00:05:12.822 14:10:37 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.822 14:10:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:12.822 ************************************ 00:05:12.822 END TEST env_mem_callbacks 00:05:12.822 ************************************ 00:05:12.822 ************************************ 00:05:12.822 END TEST env 00:05:12.822 ************************************ 00:05:12.822 00:05:12.822 real 0m1.907s 00:05:12.822 user 0m0.949s 00:05:12.822 sys 0m0.609s 00:05:12.822 14:10:37 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.822 14:10:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.822 14:10:37 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:12.822 14:10:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.822 14:10:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.822 14:10:37 -- common/autotest_common.sh@10 -- # set +x 00:05:12.822 ************************************ 00:05:12.822 START TEST rpc 00:05:12.822 ************************************ 00:05:12.822 14:10:37 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:12.822 * Looking for test storage... 
00:05:12.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.822 14:10:37 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.822 14:10:37 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.822 14:10:37 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.103 14:10:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.103 14:10:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.103 14:10:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.103 14:10:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.103 14:10:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.103 14:10:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.103 14:10:37 rpc -- scripts/common.sh@345 -- # : 1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.103 14:10:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.103 14:10:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.103 14:10:37 rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.103 14:10:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.103 14:10:37 rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.103 14:10:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.103 14:10:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.103 14:10:37 rpc -- scripts/common.sh@368 -- # return 0 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.103 --rc genhtml_branch_coverage=1 00:05:13.103 --rc genhtml_function_coverage=1 00:05:13.103 --rc genhtml_legend=1 00:05:13.103 --rc geninfo_all_blocks=1 00:05:13.103 --rc geninfo_unexecuted_blocks=1 00:05:13.103 00:05:13.103 ' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.103 --rc genhtml_branch_coverage=1 00:05:13.103 --rc genhtml_function_coverage=1 00:05:13.103 --rc genhtml_legend=1 00:05:13.103 --rc geninfo_all_blocks=1 00:05:13.103 --rc geninfo_unexecuted_blocks=1 00:05:13.103 00:05:13.103 ' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.103 --rc genhtml_branch_coverage=1 00:05:13.103 --rc genhtml_function_coverage=1 00:05:13.103 --rc 
genhtml_legend=1 00:05:13.103 --rc geninfo_all_blocks=1 00:05:13.103 --rc geninfo_unexecuted_blocks=1 00:05:13.103 00:05:13.103 ' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.103 --rc genhtml_branch_coverage=1 00:05:13.103 --rc genhtml_function_coverage=1 00:05:13.103 --rc genhtml_legend=1 00:05:13.103 --rc geninfo_all_blocks=1 00:05:13.103 --rc geninfo_unexecuted_blocks=1 00:05:13.103 00:05:13.103 ' 00:05:13.103 14:10:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57895 00:05:13.103 14:10:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.103 14:10:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.103 14:10:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57895 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@835 -- # '[' -z 57895 ']' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.103 14:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.103 [2024-12-10 14:10:37.727651] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:13.103 [2024-12-10 14:10:37.727772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:05:13.103 [2024-12-10 14:10:37.879682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.103 [2024-12-10 14:10:37.918343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.103 [2024-12-10 14:10:37.918408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57895' to capture a snapshot of events at runtime. 00:05:13.103 [2024-12-10 14:10:37.918421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.103 [2024-12-10 14:10:37.918431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.103 [2024-12-10 14:10:37.918440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57895 for offline analysis/debug. 
00:05:13.103 [2024-12-10 14:10:37.918847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.362 [2024-12-10 14:10:37.964866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.362 14:10:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.362 14:10:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.362 14:10:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.362 14:10:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.362 14:10:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.362 14:10:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.362 14:10:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.362 14:10:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.362 14:10:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.362 ************************************ 00:05:13.362 START TEST rpc_integrity 00:05:13.362 ************************************ 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.362 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.362 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.621 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.621 { 00:05:13.621 "name": "Malloc0", 00:05:13.621 "aliases": [ 00:05:13.621 "6b93a2a8-c28f-404a-8e35-f94e79e26809" 00:05:13.621 ], 00:05:13.621 "product_name": "Malloc disk", 00:05:13.621 "block_size": 512, 00:05:13.621 "num_blocks": 16384, 00:05:13.621 "uuid": "6b93a2a8-c28f-404a-8e35-f94e79e26809", 00:05:13.621 "assigned_rate_limits": { 00:05:13.621 "rw_ios_per_sec": 0, 00:05:13.621 "rw_mbytes_per_sec": 0, 00:05:13.621 "r_mbytes_per_sec": 0, 00:05:13.621 "w_mbytes_per_sec": 0 00:05:13.621 }, 00:05:13.621 "claimed": false, 00:05:13.621 "zoned": false, 00:05:13.621 
"supported_io_types": { 00:05:13.621 "read": true, 00:05:13.621 "write": true, 00:05:13.621 "unmap": true, 00:05:13.621 "flush": true, 00:05:13.621 "reset": true, 00:05:13.621 "nvme_admin": false, 00:05:13.621 "nvme_io": false, 00:05:13.621 "nvme_io_md": false, 00:05:13.621 "write_zeroes": true, 00:05:13.621 "zcopy": true, 00:05:13.621 "get_zone_info": false, 00:05:13.621 "zone_management": false, 00:05:13.621 "zone_append": false, 00:05:13.621 "compare": false, 00:05:13.621 "compare_and_write": false, 00:05:13.621 "abort": true, 00:05:13.621 "seek_hole": false, 00:05:13.621 "seek_data": false, 00:05:13.621 "copy": true, 00:05:13.621 "nvme_iov_md": false 00:05:13.621 }, 00:05:13.621 "memory_domains": [ 00:05:13.621 { 00:05:13.621 "dma_device_id": "system", 00:05:13.621 "dma_device_type": 1 00:05:13.621 }, 00:05:13.621 { 00:05:13.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.621 "dma_device_type": 2 00:05:13.621 } 00:05:13.621 ], 00:05:13.621 "driver_specific": {} 00:05:13.621 } 00:05:13.621 ]' 00:05:13.621 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.621 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.621 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.621 [2024-12-10 14:10:38.269214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.621 [2024-12-10 14:10:38.269260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.621 [2024-12-10 14:10:38.269278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x172ecb0 00:05:13.621 [2024-12-10 14:10:38.269288] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.621 [2024-12-10 14:10:38.270889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.621 [2024-12-10 14:10:38.270923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.621 Passthru0 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.621 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.621 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.621 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.621 { 00:05:13.621 "name": "Malloc0", 00:05:13.621 "aliases": [ 00:05:13.621 "6b93a2a8-c28f-404a-8e35-f94e79e26809" 00:05:13.621 ], 00:05:13.621 "product_name": "Malloc disk", 00:05:13.621 "block_size": 512, 00:05:13.621 "num_blocks": 16384, 00:05:13.621 "uuid": "6b93a2a8-c28f-404a-8e35-f94e79e26809", 00:05:13.621 "assigned_rate_limits": { 00:05:13.621 "rw_ios_per_sec": 0, 00:05:13.621 "rw_mbytes_per_sec": 0, 00:05:13.621 "r_mbytes_per_sec": 0, 00:05:13.621 "w_mbytes_per_sec": 0 00:05:13.621 }, 00:05:13.621 "claimed": true, 00:05:13.621 "claim_type": "exclusive_write", 00:05:13.621 "zoned": false, 00:05:13.621 "supported_io_types": { 00:05:13.621 "read": true, 00:05:13.621 "write": true, 00:05:13.621 "unmap": true, 00:05:13.621 "flush": true, 00:05:13.621 "reset": true, 00:05:13.621 "nvme_admin": false, 
00:05:13.621 "nvme_io": false, 00:05:13.621 "nvme_io_md": false, 00:05:13.621 "write_zeroes": true, 00:05:13.621 "zcopy": true, 00:05:13.621 "get_zone_info": false, 00:05:13.621 "zone_management": false, 00:05:13.621 "zone_append": false, 00:05:13.621 "compare": false, 00:05:13.622 "compare_and_write": false, 00:05:13.622 "abort": true, 00:05:13.622 "seek_hole": false, 00:05:13.622 "seek_data": false, 00:05:13.622 "copy": true, 00:05:13.622 "nvme_iov_md": false 00:05:13.622 }, 00:05:13.622 "memory_domains": [ 00:05:13.622 { 00:05:13.622 "dma_device_id": "system", 00:05:13.622 "dma_device_type": 1 00:05:13.622 }, 00:05:13.622 { 00:05:13.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.622 "dma_device_type": 2 00:05:13.622 } 00:05:13.622 ], 00:05:13.622 "driver_specific": {} 00:05:13.622 }, 00:05:13.622 { 00:05:13.622 "name": "Passthru0", 00:05:13.622 "aliases": [ 00:05:13.622 "817f6dda-187e-536f-a2e8-50960b1960ea" 00:05:13.622 ], 00:05:13.622 "product_name": "passthru", 00:05:13.622 "block_size": 512, 00:05:13.622 "num_blocks": 16384, 00:05:13.622 "uuid": "817f6dda-187e-536f-a2e8-50960b1960ea", 00:05:13.622 "assigned_rate_limits": { 00:05:13.622 "rw_ios_per_sec": 0, 00:05:13.622 "rw_mbytes_per_sec": 0, 00:05:13.622 "r_mbytes_per_sec": 0, 00:05:13.622 "w_mbytes_per_sec": 0 00:05:13.622 }, 00:05:13.622 "claimed": false, 00:05:13.622 "zoned": false, 00:05:13.622 "supported_io_types": { 00:05:13.622 "read": true, 00:05:13.622 "write": true, 00:05:13.622 "unmap": true, 00:05:13.622 "flush": true, 00:05:13.622 "reset": true, 00:05:13.622 "nvme_admin": false, 00:05:13.622 "nvme_io": false, 00:05:13.622 "nvme_io_md": false, 00:05:13.622 "write_zeroes": true, 00:05:13.622 "zcopy": true, 00:05:13.622 "get_zone_info": false, 00:05:13.622 "zone_management": false, 00:05:13.622 "zone_append": false, 00:05:13.622 "compare": false, 00:05:13.622 "compare_and_write": false, 00:05:13.622 "abort": true, 00:05:13.622 "seek_hole": false, 00:05:13.622 "seek_data": false, 00:05:13.622 "copy": true, 00:05:13.622 "nvme_iov_md": false 00:05:13.622 }, 00:05:13.622 "memory_domains": [ 00:05:13.622 { 00:05:13.622 "dma_device_id": "system", 00:05:13.622 "dma_device_type": 1 00:05:13.622 }, 00:05:13.622 { 00:05:13.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.622 "dma_device_type": 2 00:05:13.622 } 00:05:13.622 ], 00:05:13.622 "driver_specific": { 00:05:13.622 "passthru": { 00:05:13.622 "name": "Passthru0", 00:05:13.622 "base_bdev_name": "Malloc0" 00:05:13.622 } 00:05:13.622 } 00:05:13.622 } 00:05:13.622 ]' 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.622 14:10:38 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.622 14:10:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.622 00:05:13.622 real 0m0.324s 00:05:13.622 user 0m0.215s 00:05:13.622 sys 0m0.043s 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.622 14:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.622 ************************************ 00:05:13.622 END TEST rpc_integrity 00:05:13.622 ************************************ 00:05:13.881 14:10:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.881 14:10:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.881 14:10:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.881 14:10:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 ************************************ 00:05:13.881 START TEST rpc_plugins 00:05:13.881 ************************************ 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.881 { 00:05:13.881 "name": "Malloc1", 00:05:13.881 "aliases": [ 00:05:13.881 "31a27cef-0f41-4116-81b7-354cff4a2123" 00:05:13.881 ], 00:05:13.881 "product_name": "Malloc disk", 00:05:13.881 "block_size": 4096, 00:05:13.881 "num_blocks": 256, 00:05:13.881 "uuid": "31a27cef-0f41-4116-81b7-354cff4a2123", 00:05:13.881 "assigned_rate_limits": { 00:05:13.881 "rw_ios_per_sec": 0, 00:05:13.881 "rw_mbytes_per_sec": 0, 00:05:13.881 "r_mbytes_per_sec": 0, 00:05:13.881 "w_mbytes_per_sec": 0 00:05:13.881 }, 00:05:13.881 "claimed": false, 00:05:13.881 "zoned": false, 00:05:13.881 "supported_io_types": { 00:05:13.881 "read": true, 00:05:13.881 "write": true, 00:05:13.881 "unmap": true, 00:05:13.881 "flush": true, 00:05:13.881 "reset": true, 00:05:13.881 "nvme_admin": false, 00:05:13.881 "nvme_io": false, 00:05:13.881 "nvme_io_md": false, 00:05:13.881 "write_zeroes": true, 00:05:13.881 "zcopy": true, 00:05:13.881 "get_zone_info": false, 00:05:13.881 "zone_management": false, 00:05:13.881 "zone_append": false, 00:05:13.881 "compare": false, 00:05:13.881 "compare_and_write": false, 00:05:13.881 "abort": true, 00:05:13.881 "seek_hole": false, 00:05:13.881 "seek_data": false, 00:05:13.881 "copy": true, 00:05:13.881 "nvme_iov_md": false 00:05:13.881 }, 00:05:13.881 "memory_domains": [ 00:05:13.881 { 
00:05:13.881 "dma_device_id": "system", 00:05:13.881 "dma_device_type": 1 00:05:13.881 }, 00:05:13.881 { 00:05:13.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.881 "dma_device_type": 2 00:05:13.881 } 00:05:13.881 ], 00:05:13.881 "driver_specific": {} 00:05:13.881 } 00:05:13.881 ]' 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.881 14:10:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.881 00:05:13.881 real 0m0.153s 00:05:13.881 user 0m0.103s 00:05:13.881 sys 0m0.014s 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.881 ************************************ 00:05:13.881 14:10:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 END TEST rpc_plugins 00:05:13.881 ************************************ 00:05:13.881 14:10:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.881 14:10:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.881 14:10:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.881 14:10:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 ************************************ 00:05:13.881 START TEST rpc_trace_cmd_test 00:05:13.881 ************************************ 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.881 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.881 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57895", 00:05:13.881 "tpoint_group_mask": "0x8", 00:05:13.881 "iscsi_conn": { 00:05:13.881 "mask": "0x2", 00:05:13.881 "tpoint_mask": "0x0" 00:05:13.881 }, 00:05:13.881 "scsi": { 00:05:13.881 "mask": "0x4", 00:05:13.881 "tpoint_mask": "0x0" 00:05:13.881 }, 00:05:13.881 "bdev": { 00:05:13.881 "mask": "0x8", 00:05:13.881 "tpoint_mask": "0xffffffffffffffff" 00:05:13.881 }, 00:05:13.882 "nvmf_rdma": { 00:05:13.882 "mask": "0x10", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "nvmf_tcp": { 00:05:13.882 "mask": "0x20", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "ftl": { 00:05:13.882 
"mask": "0x40", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "blobfs": { 00:05:13.882 "mask": "0x80", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "dsa": { 00:05:13.882 "mask": "0x200", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "thread": { 00:05:13.882 "mask": "0x400", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "nvme_pcie": { 00:05:13.882 "mask": "0x800", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "iaa": { 00:05:13.882 "mask": "0x1000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "nvme_tcp": { 00:05:13.882 "mask": "0x2000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "bdev_nvme": { 00:05:13.882 "mask": "0x4000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "sock": { 00:05:13.882 "mask": "0x8000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "blob": { 00:05:13.882 "mask": "0x10000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "bdev_raid": { 00:05:13.882 "mask": "0x20000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 }, 00:05:13.882 "scheduler": { 00:05:13.882 "mask": "0x40000", 00:05:13.882 "tpoint_mask": "0x0" 00:05:13.882 } 00:05:13.882 }' 00:05:13.882 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:14.140 00:05:14.140 real 0m0.286s 00:05:14.140 user 0m0.253s 00:05:14.140 sys 0m0.022s 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.140 ************************************ 00:05:14.140 END TEST rpc_trace_cmd_test 00:05:14.140 ************************************ 00:05:14.140 14:10:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 14:10:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.400 14:10:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.400 14:10:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.400 14:10:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.400 14:10:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.400 14:10:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 ************************************ 00:05:14.400 START TEST rpc_daemon_integrity 00:05:14.400 ************************************ 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 
14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.400 { 00:05:14.400 "name": "Malloc2", 00:05:14.400 "aliases": [ 00:05:14.400 "a38da85b-d7b4-4cae-8a89-c6e4dc59f57b" 00:05:14.400 ], 00:05:14.400 "product_name": "Malloc disk", 00:05:14.400 "block_size": 512, 00:05:14.400 "num_blocks": 16384, 00:05:14.400 "uuid": "a38da85b-d7b4-4cae-8a89-c6e4dc59f57b", 00:05:14.400 "assigned_rate_limits": { 00:05:14.400 "rw_ios_per_sec": 0, 00:05:14.400 "rw_mbytes_per_sec": 0, 00:05:14.400 "r_mbytes_per_sec": 0, 00:05:14.400 "w_mbytes_per_sec": 0 00:05:14.400 }, 00:05:14.400 "claimed": false, 00:05:14.400 "zoned": false, 00:05:14.400 "supported_io_types": { 00:05:14.400 "read": true, 00:05:14.400 "write": true, 00:05:14.400 "unmap": true, 00:05:14.400 "flush": true, 00:05:14.400 "reset": true, 00:05:14.400 "nvme_admin": false, 00:05:14.400 "nvme_io": false, 00:05:14.400 "nvme_io_md": false, 00:05:14.400 "write_zeroes": true, 00:05:14.400 "zcopy": true, 00:05:14.400 "get_zone_info": false, 00:05:14.400 "zone_management": false, 00:05:14.400 "zone_append": false, 00:05:14.400 "compare": false, 00:05:14.400 "compare_and_write": false, 00:05:14.400 "abort": true, 00:05:14.400 "seek_hole": false, 00:05:14.400 "seek_data": false, 00:05:14.400 "copy": true, 00:05:14.400 "nvme_iov_md": false 00:05:14.400 }, 00:05:14.400 "memory_domains": [ 00:05:14.400 { 00:05:14.400 "dma_device_id": "system", 00:05:14.400 "dma_device_type": 1 00:05:14.400 }, 00:05:14.400 { 00:05:14.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.400 "dma_device_type": 2 00:05:14.400 } 00:05:14.400 ], 00:05:14.400 "driver_specific": {} 00:05:14.400 } 00:05:14.400 ]' 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 [2024-12-10 14:10:39.181629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.400 [2024-12-10 14:10:39.181680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:14.400 [2024-12-10 14:10:39.181696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1792270 00:05:14.400 [2024-12-10 14:10:39.181704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.400 [2024-12-10 14:10:39.183056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.400 [2024-12-10 14:10:39.183090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.400 Passthru0 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.400 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.400 { 00:05:14.400 "name": "Malloc2", 00:05:14.400 "aliases": [ 00:05:14.400 "a38da85b-d7b4-4cae-8a89-c6e4dc59f57b" 00:05:14.400 ], 00:05:14.400 "product_name": "Malloc disk", 00:05:14.400 "block_size": 512, 00:05:14.400 "num_blocks": 16384, 00:05:14.400 "uuid": "a38da85b-d7b4-4cae-8a89-c6e4dc59f57b", 00:05:14.400 "assigned_rate_limits": { 00:05:14.400 "rw_ios_per_sec": 0, 00:05:14.400 "rw_mbytes_per_sec": 0, 00:05:14.400 "r_mbytes_per_sec": 0, 00:05:14.400 "w_mbytes_per_sec": 0 00:05:14.400 }, 00:05:14.400 "claimed": true, 00:05:14.400 "claim_type": "exclusive_write", 00:05:14.400 "zoned": false, 00:05:14.400 "supported_io_types": { 00:05:14.400 "read": true, 00:05:14.400 "write": true, 00:05:14.400 "unmap": true, 00:05:14.400 "flush": true, 00:05:14.400 "reset": true, 00:05:14.400 "nvme_admin": false, 00:05:14.400 "nvme_io": false, 00:05:14.400 "nvme_io_md": false, 00:05:14.400 "write_zeroes": true, 00:05:14.400 "zcopy": true, 00:05:14.400 "get_zone_info": false, 00:05:14.400 "zone_management": false, 00:05:14.400 "zone_append": false, 00:05:14.400 "compare": false, 00:05:14.400 "compare_and_write": false, 00:05:14.400 "abort": true, 00:05:14.400 "seek_hole": false, 00:05:14.400 "seek_data": false, 00:05:14.400 "copy": true, 00:05:14.400 "nvme_iov_md": false 00:05:14.400 }, 00:05:14.400 "memory_domains": [ 00:05:14.400 { 00:05:14.400 "dma_device_id": "system", 00:05:14.400 "dma_device_type": 1 00:05:14.400 }, 00:05:14.400 { 00:05:14.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.400 "dma_device_type": 2 00:05:14.400 } 00:05:14.400 ], 00:05:14.400 "driver_specific": {} 00:05:14.400 }, 00:05:14.400 { 00:05:14.400 "name": "Passthru0", 00:05:14.400 "aliases": [ 00:05:14.400 "e6a6462f-f3ff-542d-b8e7-0d7decd24eaf" 00:05:14.401 ], 00:05:14.401 "product_name": "passthru", 00:05:14.401 "block_size": 512, 00:05:14.401 "num_blocks": 16384, 00:05:14.401 "uuid": "e6a6462f-f3ff-542d-b8e7-0d7decd24eaf", 00:05:14.401 "assigned_rate_limits": { 00:05:14.401 "rw_ios_per_sec": 0, 00:05:14.401 "rw_mbytes_per_sec": 0, 00:05:14.401 "r_mbytes_per_sec": 0, 00:05:14.401 "w_mbytes_per_sec": 0 00:05:14.401 }, 00:05:14.401 "claimed": false, 00:05:14.401 "zoned": false, 00:05:14.401 "supported_io_types": { 00:05:14.401 "read": true, 00:05:14.401 "write": true, 00:05:14.401 "unmap": true, 00:05:14.401 "flush": true, 00:05:14.401 "reset": true, 00:05:14.401 "nvme_admin": false, 00:05:14.401 "nvme_io": false, 00:05:14.401 
"nvme_io_md": false, 00:05:14.401 "write_zeroes": true, 00:05:14.401 "zcopy": true, 00:05:14.401 "get_zone_info": false, 00:05:14.401 "zone_management": false, 00:05:14.401 "zone_append": false, 00:05:14.401 "compare": false, 00:05:14.401 "compare_and_write": false, 00:05:14.401 "abort": true, 00:05:14.401 "seek_hole": false, 00:05:14.401 "seek_data": false, 00:05:14.401 "copy": true, 00:05:14.401 "nvme_iov_md": false 00:05:14.401 }, 00:05:14.401 "memory_domains": [ 00:05:14.401 { 00:05:14.401 "dma_device_id": "system", 00:05:14.401 "dma_device_type": 1 00:05:14.401 }, 00:05:14.401 { 00:05:14.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.401 "dma_device_type": 2 00:05:14.401 } 00:05:14.401 ], 00:05:14.401 "driver_specific": { 00:05:14.401 "passthru": { 00:05:14.401 "name": "Passthru0", 00:05:14.401 "base_bdev_name": "Malloc2" 00:05:14.401 } 00:05:14.401 } 00:05:14.401 } 00:05:14.401 ]' 00:05:14.401 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.659 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.660 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.660 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.660 14:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.660 00:05:14.660 real 0m0.326s 00:05:14.660 user 0m0.222s 00:05:14.660 sys 0m0.040s 00:05:14.660 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.660 ************************************ 00:05:14.660 14:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.660 END TEST rpc_daemon_integrity 00:05:14.660 ************************************ 00:05:14.660 14:10:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.660 14:10:39 rpc -- rpc/rpc.sh@84 -- # killprocess 57895 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 57895 ']' 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@958 -- # kill -0 57895 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57895 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.660 killing process with pid 57895 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57895' 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@973 -- # kill 57895 00:05:14.660 14:10:39 rpc -- common/autotest_common.sh@978 -- # wait 57895 00:05:14.918 ************************************ 00:05:14.918 END TEST rpc 00:05:14.918 ************************************ 00:05:14.918 00:05:14.918 real 0m2.183s 00:05:14.918 user 0m2.989s 00:05:14.918 sys 0m0.548s 00:05:14.918 14:10:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.918 14:10:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.918 14:10:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.918 14:10:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.918 14:10:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.918 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:05:14.918 ************************************ 00:05:14.918 START TEST skip_rpc 00:05:14.918 ************************************ 00:05:14.918 14:10:39 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.177 * Looking for test storage... 00:05:15.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.177 14:10:39 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.177 14:10:39 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.177 14:10:39 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.177 14:10:39 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.177 14:10:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.178 14:10:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.178 --rc genhtml_branch_coverage=1 00:05:15.178 --rc genhtml_function_coverage=1 00:05:15.178 --rc genhtml_legend=1 00:05:15.178 --rc geninfo_all_blocks=1 00:05:15.178 --rc geninfo_unexecuted_blocks=1 00:05:15.178 00:05:15.178 ' 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.178 --rc genhtml_branch_coverage=1 00:05:15.178 --rc genhtml_function_coverage=1 00:05:15.178 --rc genhtml_legend=1 00:05:15.178 --rc geninfo_all_blocks=1 00:05:15.178 --rc geninfo_unexecuted_blocks=1 00:05:15.178 00:05:15.178 ' 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.178 --rc genhtml_branch_coverage=1 00:05:15.178 --rc genhtml_function_coverage=1 00:05:15.178 --rc genhtml_legend=1 00:05:15.178 --rc geninfo_all_blocks=1 00:05:15.178 --rc geninfo_unexecuted_blocks=1 00:05:15.178 00:05:15.178 ' 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.178 --rc genhtml_branch_coverage=1 00:05:15.178 --rc genhtml_function_coverage=1 00:05:15.178 --rc genhtml_legend=1 00:05:15.178 --rc geninfo_all_blocks=1 00:05:15.178 --rc geninfo_unexecuted_blocks=1 00:05:15.178 00:05:15.178 ' 00:05:15.178 14:10:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.178 14:10:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.178 14:10:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.178 14:10:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.178 ************************************ 00:05:15.178 START TEST skip_rpc 00:05:15.178 ************************************ 00:05:15.178 14:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:15.178 14:10:39 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58088 00:05:15.178 14:10:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.178 14:10:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:15.178 14:10:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:15.178 [2024-12-10 14:10:39.953124] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:15.178 [2024-12-10 14:10:39.953229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58088 ] 00:05:15.436 [2024-12-10 14:10:40.091251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.436 [2024-12-10 14:10:40.119535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.436 [2024-12-10 14:10:40.155753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58088 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58088 ']' 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58088 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58088 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.706 killing process with pid 58088 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58088' 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58088 00:05:20.706 14:10:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58088 00:05:20.706 00:05:20.706 real 0m5.268s 00:05:20.706 user 0m5.013s 00:05:20.706 sys 0m0.174s 00:05:20.706 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.706 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.706 ************************************ 00:05:20.706 END TEST skip_rpc 00:05:20.706 ************************************ 00:05:20.706 14:10:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:20.706 14:10:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.706 14:10:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.706 14:10:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.706 ************************************ 00:05:20.706 START TEST skip_rpc_with_json 00:05:20.706 ************************************ 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58175 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58175 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58175 ']' 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.706 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.706 [2024-12-10 14:10:45.286592] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:20.706 [2024-12-10 14:10:45.286691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58175 ] 00:05:20.706 [2024-12-10 14:10:45.434625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.706 [2024-12-10 14:10:45.465583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.706 [2024-12-10 14:10:45.502555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.966 [2024-12-10 14:10:45.626159] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.966 request: 00:05:20.966 { 00:05:20.966 "trtype": "tcp", 00:05:20.966 "method": "nvmf_get_transports", 00:05:20.966 "req_id": 1 00:05:20.966 } 00:05:20.966 Got JSON-RPC error response 00:05:20.966 response: 00:05:20.966 { 00:05:20.966 "code": -19, 00:05:20.966 "message": "No such device" 00:05:20.966 } 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.966 [2024-12-10 14:10:45.638243] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.966 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.225 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.225 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.225 { 00:05:21.225 "subsystems": [ 00:05:21.225 { 00:05:21.225 "subsystem": "fsdev", 00:05:21.225 "config": [ 00:05:21.225 { 00:05:21.225 "method": "fsdev_set_opts", 00:05:21.225 "params": { 00:05:21.225 "fsdev_io_pool_size": 65535, 00:05:21.225 "fsdev_io_cache_size": 256 00:05:21.225 } 00:05:21.225 } 00:05:21.225 ] 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "subsystem": "keyring", 00:05:21.225 "config": [] 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "subsystem": "iobuf", 00:05:21.225 "config": [ 00:05:21.225 { 00:05:21.225 "method": "iobuf_set_options", 00:05:21.225 "params": { 00:05:21.225 "small_pool_count": 8192, 00:05:21.225 "large_pool_count": 1024, 00:05:21.225 "small_bufsize": 8192, 00:05:21.225 "large_bufsize": 135168, 00:05:21.225 "enable_numa": false 00:05:21.225 } 
00:05:21.225 } 00:05:21.225 ] 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "subsystem": "sock", 00:05:21.225 "config": [ 00:05:21.225 { 00:05:21.225 "method": "sock_set_default_impl", 00:05:21.225 "params": { 00:05:21.225 "impl_name": "uring" 00:05:21.225 } 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "method": "sock_impl_set_options", 00:05:21.225 "params": { 00:05:21.225 "impl_name": "ssl", 00:05:21.225 "recv_buf_size": 4096, 00:05:21.225 "send_buf_size": 4096, 00:05:21.225 "enable_recv_pipe": true, 00:05:21.225 "enable_quickack": false, 00:05:21.225 "enable_placement_id": 0, 00:05:21.225 "enable_zerocopy_send_server": true, 00:05:21.225 "enable_zerocopy_send_client": false, 00:05:21.225 "zerocopy_threshold": 0, 00:05:21.225 "tls_version": 0, 00:05:21.225 "enable_ktls": false 00:05:21.225 } 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "method": "sock_impl_set_options", 00:05:21.225 "params": { 00:05:21.225 "impl_name": "posix", 00:05:21.225 "recv_buf_size": 2097152, 00:05:21.225 "send_buf_size": 2097152, 00:05:21.225 "enable_recv_pipe": true, 00:05:21.225 "enable_quickack": false, 00:05:21.225 "enable_placement_id": 0, 00:05:21.225 "enable_zerocopy_send_server": true, 00:05:21.225 "enable_zerocopy_send_client": false, 00:05:21.225 "zerocopy_threshold": 0, 00:05:21.225 "tls_version": 0, 00:05:21.225 "enable_ktls": false 00:05:21.225 } 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "method": "sock_impl_set_options", 00:05:21.225 "params": { 00:05:21.225 "impl_name": "uring", 00:05:21.225 "recv_buf_size": 2097152, 00:05:21.225 "send_buf_size": 2097152, 00:05:21.225 "enable_recv_pipe": true, 00:05:21.225 "enable_quickack": false, 00:05:21.225 "enable_placement_id": 0, 00:05:21.225 "enable_zerocopy_send_server": false, 00:05:21.225 "enable_zerocopy_send_client": false, 00:05:21.225 "zerocopy_threshold": 0, 00:05:21.225 "tls_version": 0, 00:05:21.225 "enable_ktls": false 00:05:21.225 } 00:05:21.225 } 00:05:21.225 ] 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "subsystem": "vmd", 00:05:21.225 "config": [] 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "subsystem": "accel", 00:05:21.225 "config": [ 00:05:21.225 { 00:05:21.225 "method": "accel_set_options", 00:05:21.225 "params": { 00:05:21.225 "small_cache_size": 128, 00:05:21.225 "large_cache_size": 16, 00:05:21.225 "task_count": 2048, 00:05:21.225 "sequence_count": 2048, 00:05:21.225 "buf_count": 2048 00:05:21.225 } 00:05:21.225 } 00:05:21.225 ] 00:05:21.225 }, 00:05:21.225 { 00:05:21.225 "subsystem": "bdev", 00:05:21.225 "config": [ 00:05:21.225 { 00:05:21.225 "method": "bdev_set_options", 00:05:21.225 "params": { 00:05:21.225 "bdev_io_pool_size": 65535, 00:05:21.225 "bdev_io_cache_size": 256, 00:05:21.226 "bdev_auto_examine": true, 00:05:21.226 "iobuf_small_cache_size": 128, 00:05:21.226 "iobuf_large_cache_size": 16 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "bdev_raid_set_options", 00:05:21.226 "params": { 00:05:21.226 "process_window_size_kb": 1024, 00:05:21.226 "process_max_bandwidth_mb_sec": 0 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "bdev_iscsi_set_options", 00:05:21.226 "params": { 00:05:21.226 "timeout_sec": 30 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "bdev_nvme_set_options", 00:05:21.226 "params": { 00:05:21.226 "action_on_timeout": "none", 00:05:21.226 "timeout_us": 0, 00:05:21.226 "timeout_admin_us": 0, 00:05:21.226 "keep_alive_timeout_ms": 10000, 00:05:21.226 "arbitration_burst": 0, 00:05:21.226 "low_priority_weight": 0, 00:05:21.226 "medium_priority_weight": 
0, 00:05:21.226 "high_priority_weight": 0, 00:05:21.226 "nvme_adminq_poll_period_us": 10000, 00:05:21.226 "nvme_ioq_poll_period_us": 0, 00:05:21.226 "io_queue_requests": 0, 00:05:21.226 "delay_cmd_submit": true, 00:05:21.226 "transport_retry_count": 4, 00:05:21.226 "bdev_retry_count": 3, 00:05:21.226 "transport_ack_timeout": 0, 00:05:21.226 "ctrlr_loss_timeout_sec": 0, 00:05:21.226 "reconnect_delay_sec": 0, 00:05:21.226 "fast_io_fail_timeout_sec": 0, 00:05:21.226 "disable_auto_failback": false, 00:05:21.226 "generate_uuids": false, 00:05:21.226 "transport_tos": 0, 00:05:21.226 "nvme_error_stat": false, 00:05:21.226 "rdma_srq_size": 0, 00:05:21.226 "io_path_stat": false, 00:05:21.226 "allow_accel_sequence": false, 00:05:21.226 "rdma_max_cq_size": 0, 00:05:21.226 "rdma_cm_event_timeout_ms": 0, 00:05:21.226 "dhchap_digests": [ 00:05:21.226 "sha256", 00:05:21.226 "sha384", 00:05:21.226 "sha512" 00:05:21.226 ], 00:05:21.226 "dhchap_dhgroups": [ 00:05:21.226 "null", 00:05:21.226 "ffdhe2048", 00:05:21.226 "ffdhe3072", 00:05:21.226 "ffdhe4096", 00:05:21.226 "ffdhe6144", 00:05:21.226 "ffdhe8192" 00:05:21.226 ] 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "bdev_nvme_set_hotplug", 00:05:21.226 "params": { 00:05:21.226 "period_us": 100000, 00:05:21.226 "enable": false 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "bdev_wait_for_examine" 00:05:21.226 } 00:05:21.226 ] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "scsi", 00:05:21.226 "config": null 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "scheduler", 00:05:21.226 "config": [ 00:05:21.226 { 00:05:21.226 "method": "framework_set_scheduler", 00:05:21.226 "params": { 00:05:21.226 "name": "static" 00:05:21.226 } 00:05:21.226 } 00:05:21.226 ] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "vhost_scsi", 00:05:21.226 "config": [] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "vhost_blk", 00:05:21.226 "config": [] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "ublk", 00:05:21.226 "config": [] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "nbd", 00:05:21.226 "config": [] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "nvmf", 00:05:21.226 "config": [ 00:05:21.226 { 00:05:21.226 "method": "nvmf_set_config", 00:05:21.226 "params": { 00:05:21.226 "discovery_filter": "match_any", 00:05:21.226 "admin_cmd_passthru": { 00:05:21.226 "identify_ctrlr": false 00:05:21.226 }, 00:05:21.226 "dhchap_digests": [ 00:05:21.226 "sha256", 00:05:21.226 "sha384", 00:05:21.226 "sha512" 00:05:21.226 ], 00:05:21.226 "dhchap_dhgroups": [ 00:05:21.226 "null", 00:05:21.226 "ffdhe2048", 00:05:21.226 "ffdhe3072", 00:05:21.226 "ffdhe4096", 00:05:21.226 "ffdhe6144", 00:05:21.226 "ffdhe8192" 00:05:21.226 ] 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "nvmf_set_max_subsystems", 00:05:21.226 "params": { 00:05:21.226 "max_subsystems": 1024 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "nvmf_set_crdt", 00:05:21.226 "params": { 00:05:21.226 "crdt1": 0, 00:05:21.226 "crdt2": 0, 00:05:21.226 "crdt3": 0 00:05:21.226 } 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "method": "nvmf_create_transport", 00:05:21.226 "params": { 00:05:21.226 "trtype": "TCP", 00:05:21.226 "max_queue_depth": 128, 00:05:21.226 "max_io_qpairs_per_ctrlr": 127, 00:05:21.226 "in_capsule_data_size": 4096, 00:05:21.226 "max_io_size": 131072, 00:05:21.226 "io_unit_size": 131072, 00:05:21.226 "max_aq_depth": 128, 00:05:21.226 "num_shared_buffers": 511, 00:05:21.226 
"buf_cache_size": 4294967295, 00:05:21.226 "dif_insert_or_strip": false, 00:05:21.226 "zcopy": false, 00:05:21.226 "c2h_success": true, 00:05:21.226 "sock_priority": 0, 00:05:21.226 "abort_timeout_sec": 1, 00:05:21.226 "ack_timeout": 0, 00:05:21.226 "data_wr_pool_size": 0 00:05:21.226 } 00:05:21.226 } 00:05:21.226 ] 00:05:21.226 }, 00:05:21.226 { 00:05:21.226 "subsystem": "iscsi", 00:05:21.226 "config": [ 00:05:21.226 { 00:05:21.226 "method": "iscsi_set_options", 00:05:21.226 "params": { 00:05:21.226 "node_base": "iqn.2016-06.io.spdk", 00:05:21.226 "max_sessions": 128, 00:05:21.226 "max_connections_per_session": 2, 00:05:21.226 "max_queue_depth": 64, 00:05:21.226 "default_time2wait": 2, 00:05:21.226 "default_time2retain": 20, 00:05:21.226 "first_burst_length": 8192, 00:05:21.226 "immediate_data": true, 00:05:21.226 "allow_duplicated_isid": false, 00:05:21.226 "error_recovery_level": 0, 00:05:21.226 "nop_timeout": 60, 00:05:21.226 "nop_in_interval": 30, 00:05:21.226 "disable_chap": false, 00:05:21.226 "require_chap": false, 00:05:21.226 "mutual_chap": false, 00:05:21.226 "chap_group": 0, 00:05:21.226 "max_large_datain_per_connection": 64, 00:05:21.226 "max_r2t_per_connection": 4, 00:05:21.226 "pdu_pool_size": 36864, 00:05:21.226 "immediate_data_pool_size": 16384, 00:05:21.226 "data_out_pool_size": 2048 00:05:21.226 } 00:05:21.226 } 00:05:21.226 ] 00:05:21.226 } 00:05:21.226 ] 00:05:21.226 } 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58175 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58175 ']' 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58175 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58175 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.226 killing process with pid 58175 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58175' 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58175 00:05:21.226 14:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58175 00:05:21.486 14:10:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58189 00:05:21.486 14:10:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.486 14:10:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58189 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58189 ']' 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58189 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:26.758 14:10:51 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58189 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.758 killing process with pid 58189 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58189' 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58189 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58189 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.758 00:05:26.758 real 0m6.118s 00:05:26.758 user 0m5.870s 00:05:26.758 sys 0m0.421s 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.758 14:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 ************************************ 00:05:26.758 END TEST skip_rpc_with_json 00:05:26.758 ************************************ 00:05:26.758 14:10:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.758 14:10:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.758 14:10:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.758 14:10:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 ************************************ 00:05:26.758 START TEST skip_rpc_with_delay 00:05:26.759 ************************************ 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.759 14:10:51 
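What the trace above boils down to: restart the target non-interactively from the saved JSON and treat one distinctive log line as proof the configuration was applied. A sketch of that check, using the paths and flags from the trace and assuming the target's output is what ends up in log.txt:

# Restart from the saved config and verify the nvmf TCP transport comes back up.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$SPDK_TGT --no-rpc-server -m 0x1 \
    --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
    > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &
pid=$!
sleep 5
kill "$pid"
wait "$pid" 2>/dev/null || true
# The saved config contained an nvmf TCP transport, so its init notice must be in the log.
grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt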
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.759 [2024-12-10 14:10:51.460711] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.759 00:05:26.759 real 0m0.089s 00:05:26.759 user 0m0.059s 00:05:26.759 sys 0m0.029s 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.759 ************************************ 00:05:26.759 END TEST skip_rpc_with_delay 00:05:26.759 ************************************ 00:05:26.759 14:10:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.759 14:10:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.759 14:10:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.759 14:10:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.759 14:10:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.759 14:10:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.759 14:10:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.759 ************************************ 00:05:26.759 START TEST exit_on_failed_rpc_init 00:05:26.759 ************************************ 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58299 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58299 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58299 ']' 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.759 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.018 [2024-12-10 14:10:51.604852] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
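The error above is the expected outcome: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which is meaningless when --no-rpc-server suppresses the RPC server entirely. The test inverts the exit status, so a sketch of the same expected-failure check is simply:

# Expected-failure check mirrored from the trace above.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
if $SPDK_TGT --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "ERROR: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
echo "spdk_tgt rejected --wait-for-rpc as expected"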
00:05:27.018 [2024-12-10 14:10:51.604983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:05:27.018 [2024-12-10 14:10:51.746242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.018 [2024-12-10 14:10:51.774801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.018 [2024-12-10 14:10:51.812045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:27.277 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.277 [2024-12-10 14:10:52.008588] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:27.277 [2024-12-10 14:10:52.008702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58309 ] 00:05:27.536 [2024-12-10 14:10:52.159636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.536 [2024-12-10 14:10:52.197421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.536 [2024-12-10 14:10:52.197538] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:27.536 [2024-12-10 14:10:52.197561] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.536 [2024-12-10 14:10:52.197571] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58299 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58299 ']' 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58299 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58299 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.536 killing process with pid 58299 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58299' 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58299 00:05:27.536 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58299 00:05:27.795 00:05:27.795 real 0m0.973s 00:05:27.795 user 0m1.142s 00:05:27.795 sys 0m0.263s 00:05:27.795 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.795 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.795 ************************************ 00:05:27.795 END TEST exit_on_failed_rpc_init 00:05:27.795 ************************************ 00:05:27.795 14:10:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.795 00:05:27.795 real 0m12.855s 00:05:27.795 user 0m12.289s 00:05:27.795 sys 0m1.075s 00:05:27.795 14:10:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.795 14:10:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.795 ************************************ 00:05:27.795 END TEST skip_rpc 00:05:27.796 ************************************ 00:05:27.796 14:10:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:27.796 14:10:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.796 14:10:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.796 14:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.796 
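exit_on_failed_rpc_init above is a two-instance collision test: the first spdk_tgt owns /var/tmp/spdk.sock, so the second one's RPC init fails with "socket path in use" and the app must exit non-zero. A rough sketch of the scenario (the real test polls the socket with waitforlisten instead of sleeping):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$SPDK_TGT -m 0x1 &            # first target claims /var/tmp/spdk.sock
first_pid=$!
sleep 1                       # crude wait; the test uses waitforlisten instead
if $SPDK_TGT -m 0x2; then     # second target, different core mask, same socket
    echo "ERROR: second target should have failed RPC init" >&2
fi
kill "$first_pid"             # plain SIGTERM, as killprocess does above
wait "$first_pid" 2>/dev/null || true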
************************************ 00:05:27.796 START TEST rpc_client 00:05:27.796 ************************************ 00:05:27.796 14:10:52 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.055 * Looking for test storage... 00:05:28.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.055 14:10:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.055 --rc genhtml_branch_coverage=1 00:05:28.055 --rc genhtml_function_coverage=1 00:05:28.055 --rc genhtml_legend=1 00:05:28.055 --rc geninfo_all_blocks=1 00:05:28.055 --rc geninfo_unexecuted_blocks=1 00:05:28.055 00:05:28.055 ' 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.055 --rc genhtml_branch_coverage=1 00:05:28.055 --rc genhtml_function_coverage=1 00:05:28.055 --rc genhtml_legend=1 00:05:28.055 --rc geninfo_all_blocks=1 00:05:28.055 --rc geninfo_unexecuted_blocks=1 00:05:28.055 00:05:28.055 ' 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.055 --rc genhtml_branch_coverage=1 00:05:28.055 --rc genhtml_function_coverage=1 00:05:28.055 --rc genhtml_legend=1 00:05:28.055 --rc geninfo_all_blocks=1 00:05:28.055 --rc geninfo_unexecuted_blocks=1 00:05:28.055 00:05:28.055 ' 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.055 --rc genhtml_branch_coverage=1 00:05:28.055 --rc genhtml_function_coverage=1 00:05:28.055 --rc genhtml_legend=1 00:05:28.055 --rc geninfo_all_blocks=1 00:05:28.055 --rc geninfo_unexecuted_blocks=1 00:05:28.055 00:05:28.055 ' 00:05:28.055 14:10:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:28.055 OK 00:05:28.055 14:10:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.055 00:05:28.055 real 0m0.205s 00:05:28.055 user 0m0.126s 00:05:28.055 sys 0m0.090s 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.055 ************************************ 00:05:28.055 END TEST rpc_client 00:05:28.055 ************************************ 00:05:28.055 14:10:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.055 14:10:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.055 14:10:52 -- 
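The long trace above (repeated again at the start of json_config below) is scripts/common.sh deciding whether the installed lcov is older than 2 so the extra branch/function coverage flags get exported. A self-contained way to express the same dotted-version test (the helper name and sort-based method here are illustrative, not the repo's cmp_versions, and only part of the exported flag set is shown):

# Illustrative stand-in for the version comparison traced above.
version_lt() {
    # true when $1 sorts strictly before $2 in version order
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi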
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.055 14:10:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.055 14:10:52 -- common/autotest_common.sh@10 -- # set +x 00:05:28.055 ************************************ 00:05:28.055 START TEST json_config 00:05:28.055 ************************************ 00:05:28.055 14:10:52 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.315 14:10:52 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.315 14:10:52 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.315 14:10:52 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.315 14:10:53 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.315 14:10:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.315 14:10:53 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.315 14:10:53 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.315 14:10:53 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.315 14:10:53 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.315 14:10:53 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:28.315 14:10:53 json_config -- scripts/common.sh@345 -- # : 1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.315 14:10:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.315 14:10:53 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@353 -- # local d=1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.315 14:10:53 json_config -- scripts/common.sh@355 -- # echo 1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.315 14:10:53 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@353 -- # local d=2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.315 14:10:53 json_config -- scripts/common.sh@355 -- # echo 2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.315 14:10:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.315 14:10:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.315 14:10:53 json_config -- scripts/common.sh@368 -- # return 0 00:05:28.315 14:10:53 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.315 14:10:53 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.315 --rc genhtml_branch_coverage=1 00:05:28.315 --rc genhtml_function_coverage=1 00:05:28.315 --rc genhtml_legend=1 00:05:28.315 --rc geninfo_all_blocks=1 00:05:28.315 --rc geninfo_unexecuted_blocks=1 00:05:28.315 00:05:28.315 ' 00:05:28.315 14:10:53 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.315 --rc genhtml_branch_coverage=1 00:05:28.315 --rc genhtml_function_coverage=1 00:05:28.315 --rc genhtml_legend=1 00:05:28.315 --rc geninfo_all_blocks=1 00:05:28.315 --rc geninfo_unexecuted_blocks=1 00:05:28.315 00:05:28.315 ' 00:05:28.315 14:10:53 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.315 --rc genhtml_branch_coverage=1 00:05:28.315 --rc genhtml_function_coverage=1 00:05:28.315 --rc genhtml_legend=1 00:05:28.315 --rc geninfo_all_blocks=1 00:05:28.315 --rc geninfo_unexecuted_blocks=1 00:05:28.315 00:05:28.315 ' 00:05:28.315 14:10:53 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.315 --rc genhtml_branch_coverage=1 00:05:28.315 --rc genhtml_function_coverage=1 00:05:28.315 --rc genhtml_legend=1 00:05:28.315 --rc geninfo_all_blocks=1 00:05:28.315 --rc geninfo_unexecuted_blocks=1 00:05:28.315 00:05:28.315 ' 00:05:28.315 14:10:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.315 14:10:53 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.315 14:10:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.316 14:10:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.316 14:10:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.316 14:10:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.316 14:10:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.316 14:10:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.316 14:10:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.316 14:10:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.316 14:10:53 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.316 14:10:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@51 -- # : 0 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:28.316 14:10:53 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:28.316 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:28.316 14:10:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.316 INFO: JSON configuration test init 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.316 14:10:53 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.316 14:10:53 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.316 14:10:53 json_config -- json_config/common.sh@10 -- # shift 
00:05:28.316 14:10:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.316 14:10:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.316 14:10:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.316 14:10:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.316 14:10:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.316 14:10:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58443 00:05:28.316 Waiting for target to run... 00:05:28.316 14:10:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.316 14:10:53 json_config -- json_config/common.sh@25 -- # waitforlisten 58443 /var/tmp/spdk_tgt.sock 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.316 14:10:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.316 14:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.316 [2024-12-10 14:10:53.117828] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:28.316 [2024-12-10 14:10:53.117938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58443 ] 00:05:28.575 [2024-12-10 14:10:53.400047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.834 [2024-12-10 14:10:53.423606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.402 14:10:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.402 14:10:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:29.402 00:05:29.402 14:10:54 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.402 14:10:54 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:29.402 14:10:54 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:29.402 14:10:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.402 14:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.402 14:10:54 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:29.402 14:10:54 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:29.402 14:10:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.402 14:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.402 14:10:54 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.402 14:10:54 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:29.402 14:10:54 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:29.970 [2024-12-10 14:10:54.515677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:29.970 14:10:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.970 14:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:29.970 14:10:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:29.970 14:10:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:30.229 14:10:54 json_config -- json_config/json_config.sh@54 -- # sort 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:30.229 14:10:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.229 14:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:30.229 14:10:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.229 14:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.229 14:10:55 json_config -- 
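The json_config run above drives a target started with --wait-for-rpc entirely over its RPC socket: gen_nvme.sh emits a subsystems JSON that is fed into load_config, then notify_get_types is checked against the expected bdev/fsdev register/unregister events. Condensed into direct calls (a sketch; piping the generator straight into load_config is how the test invokes them):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
# Feed a generated NVMe subsystem config into the waiting target.
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | $RPC load_config
# List the notification types the target actually registered.
$RPC notify_get_types | jq -r '.[]'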
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:30.229 14:10:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.229 14:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.487 MallocForNvmf0 00:05:30.487 14:10:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.487 14:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.745 MallocForNvmf1 00:05:30.745 14:10:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.745 14:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.004 [2024-12-10 14:10:55.769403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.004 14:10:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.004 14:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.262 14:10:56 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.262 14:10:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.522 14:10:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.522 14:10:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.781 14:10:56 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:31.781 14:10:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.040 [2024-12-10 14:10:56.721885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.040 14:10:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:32.040 14:10:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.040 14:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.040 14:10:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:32.040 14:10:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.040 14:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.040 14:10:56 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:32.040 14:10:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.040 14:10:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.299 MallocBdevForConfigChangeCheck 00:05:32.299 14:10:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:32.299 14:10:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.299 14:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.557 14:10:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:32.557 14:10:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.816 INFO: shutting down applications... 00:05:32.816 14:10:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:32.816 14:10:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:32.816 14:10:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:32.816 14:10:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:32.816 14:10:57 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.075 Calling clear_iscsi_subsystem 00:05:33.075 Calling clear_nvmf_subsystem 00:05:33.075 Calling clear_nbd_subsystem 00:05:33.075 Calling clear_ublk_subsystem 00:05:33.075 Calling clear_vhost_blk_subsystem 00:05:33.075 Calling clear_vhost_scsi_subsystem 00:05:33.075 Calling clear_bdev_subsystem 00:05:33.075 14:10:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:33.075 14:10:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:33.075 14:10:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:33.075 14:10:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.075 14:10:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.075 14:10:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.643 14:10:58 json_config -- json_config/json_config.sh@352 -- # break 00:05:33.643 14:10:58 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:33.643 14:10:58 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:33.643 14:10:58 json_config -- json_config/common.sh@31 -- # local app=target 00:05:33.643 14:10:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.643 14:10:58 json_config -- json_config/common.sh@35 -- # [[ -n 58443 ]] 00:05:33.643 14:10:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58443 00:05:33.643 14:10:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.643 14:10:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.643 14:10:58 json_config -- json_config/common.sh@41 -- # kill -0 58443 00:05:33.643 14:10:58 json_config -- json_config/common.sh@45 -- # 
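The create_nvmf_subsystem_config step traced above reduces to a handful of RPCs; written out directly (bdev names, sizes, and the NQN are copied from the trace, the socket path is the test's own):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420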
sleep 0.5 00:05:34.211 14:10:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.211 14:10:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.211 14:10:58 json_config -- json_config/common.sh@41 -- # kill -0 58443 00:05:34.211 14:10:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.211 14:10:58 json_config -- json_config/common.sh@43 -- # break 00:05:34.211 14:10:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.211 SPDK target shutdown done 00:05:34.211 14:10:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.211 INFO: relaunching applications... 00:05:34.211 14:10:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:34.211 14:10:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.211 14:10:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.211 14:10:58 json_config -- json_config/common.sh@10 -- # shift 00:05:34.211 14:10:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.211 14:10:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.211 14:10:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.211 14:10:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.211 14:10:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.211 14:10:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58639 00:05:34.211 Waiting for target to run... 00:05:34.211 14:10:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.211 14:10:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.211 14:10:58 json_config -- json_config/common.sh@25 -- # waitforlisten 58639 /var/tmp/spdk_tgt.sock 00:05:34.211 14:10:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 58639 ']' 00:05:34.211 14:10:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.211 14:10:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.211 14:10:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.211 14:10:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.211 14:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.211 [2024-12-10 14:10:58.823425] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
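Shutdown and relaunch above follow the json_config/common.sh pattern: SIGINT the target, poll its pid until it is gone, then start a new instance from the saved spdk_tgt_config.json. The polling loop, lifted from the trace (the pid shown is this run's; a reusable script would substitute its own):

pid=58443                     # target pid from this run; illustrative if reused
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # still alive?
    sleep 0.5
done
echo 'SPDK target shutdown done'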
00:05:34.211 [2024-12-10 14:10:58.823527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58639 ] 00:05:34.470 [2024-12-10 14:10:59.120275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.470 [2024-12-10 14:10:59.141276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.470 [2024-12-10 14:10:59.272886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.729 [2024-12-10 14:10:59.468335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.729 [2024-12-10 14:10:59.500416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.988 14:10:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.988 14:10:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:34.988 00:05:34.988 14:10:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:34.988 14:10:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:34.988 INFO: Checking if target configuration is the same... 00:05:34.988 14:10:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:34.988 14:10:59 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.988 14:10:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:34.988 14:10:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.988 + '[' 2 -ne 2 ']' 00:05:34.988 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:34.988 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:34.988 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:34.988 +++ basename /dev/fd/62 00:05:34.988 ++ mktemp /tmp/62.XXX 00:05:34.988 + tmp_file_1=/tmp/62.fs2 00:05:34.988 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.988 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.988 + tmp_file_2=/tmp/spdk_tgt_config.json.Zwm 00:05:34.988 + ret=0 00:05:34.988 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:35.556 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:35.556 + diff -u /tmp/62.fs2 /tmp/spdk_tgt_config.json.Zwm 00:05:35.556 INFO: JSON config files are the same 00:05:35.556 + echo 'INFO: JSON config files are the same' 00:05:35.556 + rm /tmp/62.fs2 /tmp/spdk_tgt_config.json.Zwm 00:05:35.556 + exit 0 00:05:35.556 14:11:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:35.556 INFO: changing configuration and checking if this can be detected... 00:05:35.556 14:11:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:35.556 14:11:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.556 14:11:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.815 14:11:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:35.815 14:11:00 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.815 14:11:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.815 + '[' 2 -ne 2 ']' 00:05:35.815 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:35.815 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:35.815 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:35.815 +++ basename /dev/fd/62 00:05:35.815 ++ mktemp /tmp/62.XXX 00:05:35.815 + tmp_file_1=/tmp/62.G13 00:05:35.815 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.815 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:35.815 + tmp_file_2=/tmp/spdk_tgt_config.json.slc 00:05:35.815 + ret=0 00:05:35.815 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:36.387 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:36.387 + diff -u /tmp/62.G13 /tmp/spdk_tgt_config.json.slc 00:05:36.387 + ret=1 00:05:36.387 + echo '=== Start of file: /tmp/62.G13 ===' 00:05:36.387 + cat /tmp/62.G13 00:05:36.387 + echo '=== End of file: /tmp/62.G13 ===' 00:05:36.387 + echo '' 00:05:36.387 + echo '=== Start of file: /tmp/spdk_tgt_config.json.slc ===' 00:05:36.387 + cat /tmp/spdk_tgt_config.json.slc 00:05:36.387 + echo '=== End of file: /tmp/spdk_tgt_config.json.slc ===' 00:05:36.387 + echo '' 00:05:36.387 + rm /tmp/62.G13 /tmp/spdk_tgt_config.json.slc 00:05:36.387 + exit 1 00:05:36.387 INFO: configuration change detected. 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
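[editor note] The step above deletes the MallocBdevForConfigChangeCheck bdev over RPC and repeats the earlier comparison: the running configuration is saved again, both JSON documents are normalized with config_filter.py, and the non-empty diff (exit 1) is what gets reported as 'configuration change detected'. A condensed sketch of that check using the scripts invoked in the trace (paths relative to the SPDK repo; the temporary file names are illustrative, not the mktemp names above):

    # mutate the running target, then re-save its config and compare with the copy on disk
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_config.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_config.json
    if ! diff -u /tmp/saved_config.json /tmp/live_config.json; then
        echo 'INFO: configuration change detected.'
    fi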
00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@324 -- # [[ -n 58639 ]] 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.387 14:11:01 json_config -- json_config/json_config.sh@330 -- # killprocess 58639 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@954 -- # '[' -z 58639 ']' 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@958 -- # kill -0 58639 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@959 -- # uname 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58639 00:05:36.387 killing process with pid 58639 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58639' 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@973 -- # kill 58639 00:05:36.387 14:11:01 json_config -- common/autotest_common.sh@978 -- # wait 58639 00:05:36.650 14:11:01 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.650 14:11:01 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:36.650 14:11:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.650 14:11:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.650 INFO: Success 00:05:36.650 14:11:01 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:36.650 14:11:01 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:36.650 ************************************ 00:05:36.650 END TEST json_config 00:05:36.650 
************************************ 00:05:36.650 00:05:36.650 real 0m8.462s 00:05:36.650 user 0m12.386s 00:05:36.650 sys 0m1.402s 00:05:36.650 14:11:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.650 14:11:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.650 14:11:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.650 14:11:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.650 14:11:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.650 14:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:36.650 ************************************ 00:05:36.650 START TEST json_config_extra_key 00:05:36.650 ************************************ 00:05:36.650 14:11:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.650 14:11:01 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.650 14:11:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.650 14:11:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.909 14:11:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.909 14:11:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:36.909 14:11:01 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.909 14:11:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.909 --rc genhtml_branch_coverage=1 00:05:36.910 --rc genhtml_function_coverage=1 00:05:36.910 --rc genhtml_legend=1 00:05:36.910 --rc geninfo_all_blocks=1 00:05:36.910 --rc geninfo_unexecuted_blocks=1 00:05:36.910 00:05:36.910 ' 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.910 --rc genhtml_branch_coverage=1 00:05:36.910 --rc genhtml_function_coverage=1 00:05:36.910 --rc genhtml_legend=1 00:05:36.910 --rc geninfo_all_blocks=1 00:05:36.910 --rc geninfo_unexecuted_blocks=1 00:05:36.910 00:05:36.910 ' 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.910 --rc genhtml_branch_coverage=1 00:05:36.910 --rc genhtml_function_coverage=1 00:05:36.910 --rc genhtml_legend=1 00:05:36.910 --rc geninfo_all_blocks=1 00:05:36.910 --rc geninfo_unexecuted_blocks=1 00:05:36.910 00:05:36.910 ' 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.910 --rc genhtml_branch_coverage=1 00:05:36.910 --rc genhtml_function_coverage=1 00:05:36.910 --rc genhtml_legend=1 00:05:36.910 --rc geninfo_all_blocks=1 00:05:36.910 --rc geninfo_unexecuted_blocks=1 00:05:36.910 00:05:36.910 ' 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.910 14:11:01 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.910 14:11:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.910 14:11:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.910 14:11:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.910 14:11:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.910 14:11:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.910 14:11:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.910 14:11:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.910 14:11:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:36.910 14:11:01 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.910 14:11:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:36.910 INFO: launching applications... 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
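[editor note] The json_config common.sh sourced above tracks each application under test in bash associative arrays keyed by app name ('target' here): its RPC socket, extra spdk_tgt parameters, JSON config path, and eventually its pid. A condensed sketch of how those entries turn into the launch that follows (values copied from the trace; the real helper also installs the ERR trap shown above):

    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    declare -A app_pid
    app=target
    # assemble the spdk_tgt command line from the per-app entries and remember the pid
    build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!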
00:05:36.910 14:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58793 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.910 Waiting for target to run... 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58793 /var/tmp/spdk_tgt.sock 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58793 ']' 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.910 14:11:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.910 14:11:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.910 [2024-12-10 14:11:01.634767] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:36.910 [2024-12-10 14:11:01.635297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:05:37.169 [2024-12-10 14:11:01.949682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.169 [2024-12-10 14:11:01.971563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.169 [2024-12-10 14:11:01.996142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.144 00:05:38.144 INFO: shutting down applications... 00:05:38.144 14:11:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.144 14:11:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:38.144 14:11:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
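[editor note] Once launched, the test blocks in waitforlisten until spdk_tgt answers on its RPC socket (max_retries=100, as in the trace). A simplified stand-in for that wait, assuming rpc_get_methods is used as the readiness probe:

    # poll the UNIX-domain RPC socket until the target responds, then proceed
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done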
00:05:38.144 14:11:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58793 ]] 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58793 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58793 00:05:38.144 14:11:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58793 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.443 14:11:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.443 SPDK target shutdown done 00:05:38.443 14:11:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:38.443 Success 00:05:38.443 00:05:38.443 real 0m1.814s 00:05:38.443 user 0m1.703s 00:05:38.443 sys 0m0.325s 00:05:38.443 14:11:03 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.443 14:11:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.443 ************************************ 00:05:38.443 END TEST json_config_extra_key 00:05:38.443 ************************************ 00:05:38.443 14:11:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.443 14:11:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.443 14:11:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.443 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:38.443 ************************************ 00:05:38.443 START TEST alias_rpc 00:05:38.443 ************************************ 00:05:38.443 14:11:03 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.710 * Looking for test storage... 
00:05:38.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.710 14:11:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.710 --rc genhtml_branch_coverage=1 00:05:38.710 --rc genhtml_function_coverage=1 00:05:38.710 --rc genhtml_legend=1 00:05:38.710 --rc geninfo_all_blocks=1 00:05:38.710 --rc geninfo_unexecuted_blocks=1 00:05:38.710 00:05:38.710 ' 00:05:38.710 14:11:03 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.711 --rc genhtml_branch_coverage=1 00:05:38.711 --rc genhtml_function_coverage=1 00:05:38.711 --rc genhtml_legend=1 00:05:38.711 --rc geninfo_all_blocks=1 00:05:38.711 --rc geninfo_unexecuted_blocks=1 00:05:38.711 00:05:38.711 ' 00:05:38.711 14:11:03 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.711 --rc genhtml_branch_coverage=1 00:05:38.711 --rc genhtml_function_coverage=1 00:05:38.711 --rc genhtml_legend=1 00:05:38.711 --rc geninfo_all_blocks=1 00:05:38.711 --rc geninfo_unexecuted_blocks=1 00:05:38.711 00:05:38.711 ' 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.711 --rc genhtml_branch_coverage=1 00:05:38.711 --rc genhtml_function_coverage=1 00:05:38.711 --rc genhtml_legend=1 00:05:38.711 --rc geninfo_all_blocks=1 00:05:38.711 --rc geninfo_unexecuted_blocks=1 00:05:38.711 00:05:38.711 ' 00:05:38.711 14:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.711 14:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58865 00:05:38.711 14:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58865 00:05:38.711 14:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58865 ']' 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.711 14:11:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.711 [2024-12-10 14:11:03.503827] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:38.711 [2024-12-10 14:11:03.504169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58865 ] 00:05:38.970 [2024-12-10 14:11:03.646597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.970 [2024-12-10 14:11:03.678088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.970 [2024-12-10 14:11:03.715253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.228 14:11:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.228 14:11:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.228 14:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.487 14:11:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58865 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58865 ']' 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58865 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58865 00:05:39.487 killing process with pid 58865 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58865' 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 58865 00:05:39.487 14:11:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 58865 00:05:39.746 ************************************ 00:05:39.746 END TEST alias_rpc 00:05:39.746 ************************************ 00:05:39.746 00:05:39.746 real 0m1.174s 00:05:39.746 user 0m1.357s 00:05:39.746 sys 0m0.318s 00:05:39.746 14:11:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.746 14:11:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.746 14:11:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:39.746 14:11:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.746 14:11:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.746 14:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.746 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:39.746 ************************************ 00:05:39.746 START TEST spdkcli_tcp 00:05:39.746 ************************************ 00:05:39.746 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.746 * Looking for test storage... 
00:05:39.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.746 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.746 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.746 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.005 14:11:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.005 --rc genhtml_branch_coverage=1 00:05:40.005 --rc genhtml_function_coverage=1 00:05:40.005 --rc genhtml_legend=1 00:05:40.005 --rc geninfo_all_blocks=1 00:05:40.005 --rc geninfo_unexecuted_blocks=1 00:05:40.005 00:05:40.005 ' 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.005 --rc genhtml_branch_coverage=1 00:05:40.005 --rc genhtml_function_coverage=1 00:05:40.005 --rc genhtml_legend=1 00:05:40.005 --rc geninfo_all_blocks=1 00:05:40.005 --rc geninfo_unexecuted_blocks=1 00:05:40.005 
00:05:40.005 ' 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.005 --rc genhtml_branch_coverage=1 00:05:40.005 --rc genhtml_function_coverage=1 00:05:40.005 --rc genhtml_legend=1 00:05:40.005 --rc geninfo_all_blocks=1 00:05:40.005 --rc geninfo_unexecuted_blocks=1 00:05:40.005 00:05:40.005 ' 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.005 --rc genhtml_branch_coverage=1 00:05:40.005 --rc genhtml_function_coverage=1 00:05:40.005 --rc genhtml_legend=1 00:05:40.005 --rc geninfo_all_blocks=1 00:05:40.005 --rc geninfo_unexecuted_blocks=1 00:05:40.005 00:05:40.005 ' 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58942 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58942 00:05:40.005 14:11:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58942 ']' 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.005 14:11:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.005 [2024-12-10 14:11:04.732500] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:40.005 [2024-12-10 14:11:04.732798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58942 ] 00:05:40.264 [2024-12-10 14:11:04.878108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.264 [2024-12-10 14:11:04.908191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.264 [2024-12-10 14:11:04.908198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.264 [2024-12-10 14:11:04.946083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.264 14:11:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.264 14:11:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:40.264 14:11:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58946 00:05:40.264 14:11:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.264 14:11:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.522 [ 00:05:40.522 "bdev_malloc_delete", 00:05:40.522 "bdev_malloc_create", 00:05:40.522 "bdev_null_resize", 00:05:40.522 "bdev_null_delete", 00:05:40.522 "bdev_null_create", 00:05:40.522 "bdev_nvme_cuse_unregister", 00:05:40.522 "bdev_nvme_cuse_register", 00:05:40.522 "bdev_opal_new_user", 00:05:40.522 "bdev_opal_set_lock_state", 00:05:40.522 "bdev_opal_delete", 00:05:40.522 "bdev_opal_get_info", 00:05:40.522 "bdev_opal_create", 00:05:40.522 "bdev_nvme_opal_revert", 00:05:40.522 "bdev_nvme_opal_init", 00:05:40.522 "bdev_nvme_send_cmd", 00:05:40.522 "bdev_nvme_set_keys", 00:05:40.522 "bdev_nvme_get_path_iostat", 00:05:40.522 "bdev_nvme_get_mdns_discovery_info", 00:05:40.522 "bdev_nvme_stop_mdns_discovery", 00:05:40.522 "bdev_nvme_start_mdns_discovery", 00:05:40.522 "bdev_nvme_set_multipath_policy", 00:05:40.522 "bdev_nvme_set_preferred_path", 00:05:40.522 "bdev_nvme_get_io_paths", 00:05:40.522 "bdev_nvme_remove_error_injection", 00:05:40.522 "bdev_nvme_add_error_injection", 00:05:40.522 "bdev_nvme_get_discovery_info", 00:05:40.522 "bdev_nvme_stop_discovery", 00:05:40.522 "bdev_nvme_start_discovery", 00:05:40.522 "bdev_nvme_get_controller_health_info", 00:05:40.522 "bdev_nvme_disable_controller", 00:05:40.522 "bdev_nvme_enable_controller", 00:05:40.522 "bdev_nvme_reset_controller", 00:05:40.522 "bdev_nvme_get_transport_statistics", 00:05:40.522 "bdev_nvme_apply_firmware", 00:05:40.522 "bdev_nvme_detach_controller", 00:05:40.522 "bdev_nvme_get_controllers", 00:05:40.522 "bdev_nvme_attach_controller", 00:05:40.522 "bdev_nvme_set_hotplug", 00:05:40.522 "bdev_nvme_set_options", 00:05:40.522 "bdev_passthru_delete", 00:05:40.522 "bdev_passthru_create", 00:05:40.522 "bdev_lvol_set_parent_bdev", 00:05:40.522 "bdev_lvol_set_parent", 00:05:40.522 "bdev_lvol_check_shallow_copy", 00:05:40.522 "bdev_lvol_start_shallow_copy", 00:05:40.522 "bdev_lvol_grow_lvstore", 00:05:40.522 "bdev_lvol_get_lvols", 00:05:40.522 "bdev_lvol_get_lvstores", 00:05:40.522 "bdev_lvol_delete", 00:05:40.522 "bdev_lvol_set_read_only", 00:05:40.522 "bdev_lvol_resize", 00:05:40.522 "bdev_lvol_decouple_parent", 00:05:40.522 "bdev_lvol_inflate", 00:05:40.522 "bdev_lvol_rename", 00:05:40.522 "bdev_lvol_clone_bdev", 00:05:40.522 "bdev_lvol_clone", 00:05:40.522 "bdev_lvol_snapshot", 
00:05:40.522 "bdev_lvol_create", 00:05:40.522 "bdev_lvol_delete_lvstore", 00:05:40.522 "bdev_lvol_rename_lvstore", 00:05:40.522 "bdev_lvol_create_lvstore", 00:05:40.522 "bdev_raid_set_options", 00:05:40.522 "bdev_raid_remove_base_bdev", 00:05:40.522 "bdev_raid_add_base_bdev", 00:05:40.522 "bdev_raid_delete", 00:05:40.522 "bdev_raid_create", 00:05:40.522 "bdev_raid_get_bdevs", 00:05:40.522 "bdev_error_inject_error", 00:05:40.522 "bdev_error_delete", 00:05:40.522 "bdev_error_create", 00:05:40.522 "bdev_split_delete", 00:05:40.522 "bdev_split_create", 00:05:40.522 "bdev_delay_delete", 00:05:40.522 "bdev_delay_create", 00:05:40.522 "bdev_delay_update_latency", 00:05:40.522 "bdev_zone_block_delete", 00:05:40.522 "bdev_zone_block_create", 00:05:40.522 "blobfs_create", 00:05:40.522 "blobfs_detect", 00:05:40.522 "blobfs_set_cache_size", 00:05:40.522 "bdev_aio_delete", 00:05:40.522 "bdev_aio_rescan", 00:05:40.522 "bdev_aio_create", 00:05:40.522 "bdev_ftl_set_property", 00:05:40.522 "bdev_ftl_get_properties", 00:05:40.522 "bdev_ftl_get_stats", 00:05:40.522 "bdev_ftl_unmap", 00:05:40.522 "bdev_ftl_unload", 00:05:40.522 "bdev_ftl_delete", 00:05:40.522 "bdev_ftl_load", 00:05:40.522 "bdev_ftl_create", 00:05:40.522 "bdev_virtio_attach_controller", 00:05:40.522 "bdev_virtio_scsi_get_devices", 00:05:40.522 "bdev_virtio_detach_controller", 00:05:40.522 "bdev_virtio_blk_set_hotplug", 00:05:40.522 "bdev_iscsi_delete", 00:05:40.522 "bdev_iscsi_create", 00:05:40.522 "bdev_iscsi_set_options", 00:05:40.522 "bdev_uring_delete", 00:05:40.522 "bdev_uring_rescan", 00:05:40.522 "bdev_uring_create", 00:05:40.522 "accel_error_inject_error", 00:05:40.522 "ioat_scan_accel_module", 00:05:40.522 "dsa_scan_accel_module", 00:05:40.522 "iaa_scan_accel_module", 00:05:40.522 "keyring_file_remove_key", 00:05:40.522 "keyring_file_add_key", 00:05:40.522 "keyring_linux_set_options", 00:05:40.522 "fsdev_aio_delete", 00:05:40.522 "fsdev_aio_create", 00:05:40.522 "iscsi_get_histogram", 00:05:40.522 "iscsi_enable_histogram", 00:05:40.522 "iscsi_set_options", 00:05:40.522 "iscsi_get_auth_groups", 00:05:40.522 "iscsi_auth_group_remove_secret", 00:05:40.522 "iscsi_auth_group_add_secret", 00:05:40.522 "iscsi_delete_auth_group", 00:05:40.522 "iscsi_create_auth_group", 00:05:40.522 "iscsi_set_discovery_auth", 00:05:40.522 "iscsi_get_options", 00:05:40.522 "iscsi_target_node_request_logout", 00:05:40.522 "iscsi_target_node_set_redirect", 00:05:40.522 "iscsi_target_node_set_auth", 00:05:40.522 "iscsi_target_node_add_lun", 00:05:40.522 "iscsi_get_stats", 00:05:40.522 "iscsi_get_connections", 00:05:40.522 "iscsi_portal_group_set_auth", 00:05:40.522 "iscsi_start_portal_group", 00:05:40.522 "iscsi_delete_portal_group", 00:05:40.522 "iscsi_create_portal_group", 00:05:40.522 "iscsi_get_portal_groups", 00:05:40.522 "iscsi_delete_target_node", 00:05:40.522 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.522 "iscsi_target_node_add_pg_ig_maps", 00:05:40.522 "iscsi_create_target_node", 00:05:40.522 "iscsi_get_target_nodes", 00:05:40.522 "iscsi_delete_initiator_group", 00:05:40.522 "iscsi_initiator_group_remove_initiators", 00:05:40.522 "iscsi_initiator_group_add_initiators", 00:05:40.522 "iscsi_create_initiator_group", 00:05:40.522 "iscsi_get_initiator_groups", 00:05:40.522 "nvmf_set_crdt", 00:05:40.522 "nvmf_set_config", 00:05:40.522 "nvmf_set_max_subsystems", 00:05:40.522 "nvmf_stop_mdns_prr", 00:05:40.522 "nvmf_publish_mdns_prr", 00:05:40.522 "nvmf_subsystem_get_listeners", 00:05:40.522 "nvmf_subsystem_get_qpairs", 00:05:40.522 
"nvmf_subsystem_get_controllers", 00:05:40.522 "nvmf_get_stats", 00:05:40.522 "nvmf_get_transports", 00:05:40.522 "nvmf_create_transport", 00:05:40.522 "nvmf_get_targets", 00:05:40.522 "nvmf_delete_target", 00:05:40.522 "nvmf_create_target", 00:05:40.522 "nvmf_subsystem_allow_any_host", 00:05:40.522 "nvmf_subsystem_set_keys", 00:05:40.522 "nvmf_subsystem_remove_host", 00:05:40.522 "nvmf_subsystem_add_host", 00:05:40.522 "nvmf_ns_remove_host", 00:05:40.522 "nvmf_ns_add_host", 00:05:40.522 "nvmf_subsystem_remove_ns", 00:05:40.522 "nvmf_subsystem_set_ns_ana_group", 00:05:40.522 "nvmf_subsystem_add_ns", 00:05:40.522 "nvmf_subsystem_listener_set_ana_state", 00:05:40.522 "nvmf_discovery_get_referrals", 00:05:40.522 "nvmf_discovery_remove_referral", 00:05:40.522 "nvmf_discovery_add_referral", 00:05:40.522 "nvmf_subsystem_remove_listener", 00:05:40.522 "nvmf_subsystem_add_listener", 00:05:40.522 "nvmf_delete_subsystem", 00:05:40.522 "nvmf_create_subsystem", 00:05:40.522 "nvmf_get_subsystems", 00:05:40.522 "env_dpdk_get_mem_stats", 00:05:40.522 "nbd_get_disks", 00:05:40.522 "nbd_stop_disk", 00:05:40.522 "nbd_start_disk", 00:05:40.522 "ublk_recover_disk", 00:05:40.522 "ublk_get_disks", 00:05:40.522 "ublk_stop_disk", 00:05:40.522 "ublk_start_disk", 00:05:40.522 "ublk_destroy_target", 00:05:40.522 "ublk_create_target", 00:05:40.522 "virtio_blk_create_transport", 00:05:40.522 "virtio_blk_get_transports", 00:05:40.522 "vhost_controller_set_coalescing", 00:05:40.522 "vhost_get_controllers", 00:05:40.522 "vhost_delete_controller", 00:05:40.522 "vhost_create_blk_controller", 00:05:40.522 "vhost_scsi_controller_remove_target", 00:05:40.522 "vhost_scsi_controller_add_target", 00:05:40.522 "vhost_start_scsi_controller", 00:05:40.522 "vhost_create_scsi_controller", 00:05:40.522 "thread_set_cpumask", 00:05:40.522 "scheduler_set_options", 00:05:40.522 "framework_get_governor", 00:05:40.522 "framework_get_scheduler", 00:05:40.522 "framework_set_scheduler", 00:05:40.522 "framework_get_reactors", 00:05:40.522 "thread_get_io_channels", 00:05:40.522 "thread_get_pollers", 00:05:40.522 "thread_get_stats", 00:05:40.522 "framework_monitor_context_switch", 00:05:40.522 "spdk_kill_instance", 00:05:40.522 "log_enable_timestamps", 00:05:40.522 "log_get_flags", 00:05:40.522 "log_clear_flag", 00:05:40.522 "log_set_flag", 00:05:40.522 "log_get_level", 00:05:40.522 "log_set_level", 00:05:40.522 "log_get_print_level", 00:05:40.522 "log_set_print_level", 00:05:40.522 "framework_enable_cpumask_locks", 00:05:40.522 "framework_disable_cpumask_locks", 00:05:40.522 "framework_wait_init", 00:05:40.522 "framework_start_init", 00:05:40.522 "scsi_get_devices", 00:05:40.522 "bdev_get_histogram", 00:05:40.522 "bdev_enable_histogram", 00:05:40.522 "bdev_set_qos_limit", 00:05:40.522 "bdev_set_qd_sampling_period", 00:05:40.522 "bdev_get_bdevs", 00:05:40.522 "bdev_reset_iostat", 00:05:40.522 "bdev_get_iostat", 00:05:40.522 "bdev_examine", 00:05:40.522 "bdev_wait_for_examine", 00:05:40.522 "bdev_set_options", 00:05:40.522 "accel_get_stats", 00:05:40.522 "accel_set_options", 00:05:40.522 "accel_set_driver", 00:05:40.522 "accel_crypto_key_destroy", 00:05:40.522 "accel_crypto_keys_get", 00:05:40.522 "accel_crypto_key_create", 00:05:40.522 "accel_assign_opc", 00:05:40.522 "accel_get_module_info", 00:05:40.522 "accel_get_opc_assignments", 00:05:40.522 "vmd_rescan", 00:05:40.522 "vmd_remove_device", 00:05:40.522 "vmd_enable", 00:05:40.522 "sock_get_default_impl", 00:05:40.522 "sock_set_default_impl", 00:05:40.522 "sock_impl_set_options", 00:05:40.522 
"sock_impl_get_options", 00:05:40.522 "iobuf_get_stats", 00:05:40.522 "iobuf_set_options", 00:05:40.522 "keyring_get_keys", 00:05:40.522 "framework_get_pci_devices", 00:05:40.522 "framework_get_config", 00:05:40.522 "framework_get_subsystems", 00:05:40.522 "fsdev_set_opts", 00:05:40.522 "fsdev_get_opts", 00:05:40.523 "trace_get_info", 00:05:40.523 "trace_get_tpoint_group_mask", 00:05:40.523 "trace_disable_tpoint_group", 00:05:40.523 "trace_enable_tpoint_group", 00:05:40.523 "trace_clear_tpoint_mask", 00:05:40.523 "trace_set_tpoint_mask", 00:05:40.523 "notify_get_notifications", 00:05:40.523 "notify_get_types", 00:05:40.523 "spdk_get_version", 00:05:40.523 "rpc_get_methods" 00:05:40.523 ] 00:05:40.523 14:11:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.523 14:11:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.523 14:11:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.780 14:11:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.780 14:11:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58942 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58942 ']' 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58942 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58942 00:05:40.780 killing process with pid 58942 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58942' 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58942 00:05:40.780 14:11:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58942 00:05:41.039 ************************************ 00:05:41.039 END TEST spdkcli_tcp 00:05:41.039 ************************************ 00:05:41.039 00:05:41.039 real 0m1.170s 00:05:41.039 user 0m2.036s 00:05:41.039 sys 0m0.350s 00:05:41.039 14:11:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.039 14:11:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.039 14:11:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.039 14:11:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.039 14:11:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.039 14:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:41.039 ************************************ 00:05:41.039 START TEST dpdk_mem_utility 00:05:41.039 ************************************ 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.039 * Looking for test storage... 
00:05:41.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:41.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.039 14:11:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.039 --rc genhtml_branch_coverage=1 00:05:41.039 --rc genhtml_function_coverage=1 00:05:41.039 --rc genhtml_legend=1 00:05:41.039 --rc geninfo_all_blocks=1 00:05:41.039 --rc geninfo_unexecuted_blocks=1 00:05:41.039 00:05:41.039 ' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.039 --rc genhtml_branch_coverage=1 00:05:41.039 --rc genhtml_function_coverage=1 00:05:41.039 --rc genhtml_legend=1 00:05:41.039 --rc geninfo_all_blocks=1 00:05:41.039 --rc geninfo_unexecuted_blocks=1 00:05:41.039 00:05:41.039 ' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.039 --rc genhtml_branch_coverage=1 00:05:41.039 --rc genhtml_function_coverage=1 00:05:41.039 --rc genhtml_legend=1 00:05:41.039 --rc geninfo_all_blocks=1 00:05:41.039 --rc geninfo_unexecuted_blocks=1 00:05:41.039 00:05:41.039 ' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.039 --rc genhtml_branch_coverage=1 00:05:41.039 --rc genhtml_function_coverage=1 00:05:41.039 --rc genhtml_legend=1 00:05:41.039 --rc geninfo_all_blocks=1 00:05:41.039 --rc geninfo_unexecuted_blocks=1 00:05:41.039 00:05:41.039 ' 00:05:41.039 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.039 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59028 00:05:41.039 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59028 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59028 ']' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.039 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.039 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.298 [2024-12-10 14:11:05.926016] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:41.298 [2024-12-10 14:11:05.926122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59028 ] 00:05:41.298 [2024-12-10 14:11:06.072006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.298 [2024-12-10 14:11:06.100025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.557 [2024-12-10 14:11:06.136606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.557 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.557 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:41.557 14:11:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:41.557 14:11:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:41.557 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.557 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.557 { 00:05:41.557 "filename": "/tmp/spdk_mem_dump.txt" 00:05:41.557 } 00:05:41.557 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.557 14:11:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.557 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:41.557 1 heaps totaling size 818.000000 MiB 00:05:41.557 size: 818.000000 MiB heap id: 0 00:05:41.557 end heaps---------- 00:05:41.557 9 mempools totaling size 603.782043 MiB 00:05:41.557 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:41.557 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:41.557 size: 100.555481 MiB name: bdev_io_59028 00:05:41.557 size: 50.003479 MiB name: msgpool_59028 00:05:41.557 size: 36.509338 MiB name: fsdev_io_59028 00:05:41.557 size: 21.763794 MiB name: PDU_Pool 00:05:41.557 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:41.557 size: 4.133484 MiB name: evtpool_59028 00:05:41.557 size: 0.026123 MiB name: Session_Pool 00:05:41.557 end mempools------- 00:05:41.557 6 memzones totaling size 4.142822 MiB 00:05:41.557 size: 1.000366 MiB name: RG_ring_0_59028 00:05:41.557 size: 1.000366 MiB name: RG_ring_1_59028 00:05:41.557 size: 1.000366 MiB name: RG_ring_4_59028 00:05:41.557 size: 1.000366 MiB name: RG_ring_5_59028 00:05:41.557 size: 0.125366 MiB name: RG_ring_2_59028 00:05:41.557 size: 0.015991 MiB name: RG_ring_3_59028 00:05:41.557 end memzones------- 00:05:41.557 14:11:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:41.847 heap id: 0 total size: 818.000000 MiB number of busy elements: 316 number of free elements: 15 00:05:41.847 list of free elements. 
size: 10.802673 MiB 00:05:41.847 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:41.847 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:41.847 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:41.847 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:41.847 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:41.847 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:41.847 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:41.847 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:41.847 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:05:41.847 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:41.847 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:41.847 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:41.847 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:41.847 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:41.847 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:41.847 list of standard malloc elements. size: 199.268433 MiB 00:05:41.847 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:41.847 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:41.847 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:41.847 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:41.847 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:41.847 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:41.847 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:41.847 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:41.847 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:41.847 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:41.847 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:41.847 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:41.847 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:41.847 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:41.848 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:41.848 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:41.848 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:05:41.848 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:41.848 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:41.848 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:41.848 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e400 
with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:41.848 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:41.849 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:41.849 list of memzone associated elements. 
size: 607.928894 MiB 00:05:41.849 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:41.849 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:41.849 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:41.849 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:41.849 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:41.849 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59028_0 00:05:41.849 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:41.849 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59028_0 00:05:41.849 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:41.849 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59028_0 00:05:41.849 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:41.849 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:41.849 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:41.849 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:41.849 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:41.849 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59028_0 00:05:41.849 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:41.849 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59028 00:05:41.849 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:41.849 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59028 00:05:41.849 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:41.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:41.849 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:41.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:41.849 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:41.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:41.849 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:41.849 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:41.849 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:41.849 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59028 00:05:41.849 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:41.849 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59028 00:05:41.849 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:41.849 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59028 00:05:41.849 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:41.849 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59028 00:05:41.849 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:41.849 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59028 00:05:41.849 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:41.849 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59028 00:05:41.849 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:41.849 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:41.849 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:41.849 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:41.849 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:41.849 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:41.849 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:41.849 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59028 00:05:41.849 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:41.849 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59028 00:05:41.849 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:41.849 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:41.849 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:41.849 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:41.849 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:41.849 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59028 00:05:41.849 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:41.849 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:41.849 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:41.849 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59028 00:05:41.849 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:41.849 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59028 00:05:41.849 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:41.849 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59028 00:05:41.849 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:41.849 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:41.849 14:11:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:41.849 14:11:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59028 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59028 ']' 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59028 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59028 00:05:41.849 killing process with pid 59028 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59028' 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59028 00:05:41.849 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59028 00:05:42.109 00:05:42.109 real 0m0.991s 00:05:42.109 user 0m1.065s 00:05:42.109 sys 0m0.318s 00:05:42.109 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.109 ************************************ 00:05:42.109 END TEST dpdk_mem_utility 00:05:42.109 ************************************ 00:05:42.109 14:11:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.109 14:11:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.109 14:11:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.109 14:11:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.109 14:11:06 -- common/autotest_common.sh@10 -- # set +x 
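For reference, the mempool/memzone summary printed above by scripts/dpdk_mem_info.py is derived from the dump file reported by the env_dpdk_get_mem_stats RPC ({"filename": "/tmp/spdk_mem_dump.txt"}). Below is a minimal Python sketch, not the actual dpdk_mem_info.py, of how "size: ... name: ..." summary lines in output of that shape could be tallied; the raw on-disk dump layout may differ from what the script prints, so this is purely an illustration of the text shown in this log.

import re

# Sketch only: tally "size: <MiB> name: <pool>" lines from text formatted
# like the "9 mempools totaling size ..." section above. This is not
# scripts/dpdk_mem_info.py; the path and RPC name come from the log above.
POOL_LINE = re.compile(r"size:\s*([0-9.]+)\s*MiB\s*name:\s*(\S+)")

def tally_pools(text):
    pools = {}
    for line in text.splitlines():
        m = POOL_LINE.search(line)
        if m:
            pools[m.group(2)] = float(m.group(1))
    return pools, sum(pools.values())

# Example with two lines copied from the summary above:
sample = "size: 50.003479 MiB name: msgpool_59028\nsize: 36.509338 MiB name: fsdev_io_59028"
pools, total = tally_pools(sample)
print(pools, round(total, 6))  # {'msgpool_59028': 50.003479, 'fsdev_io_59028': 36.509338} 86.512817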
00:05:42.109 ************************************ 00:05:42.109 START TEST event 00:05:42.109 ************************************ 00:05:42.109 14:11:06 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.109 * Looking for test storage... 00:05:42.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.109 14:11:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.109 14:11:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.109 14:11:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.109 14:11:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.109 14:11:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.109 14:11:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.109 14:11:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.109 14:11:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.109 14:11:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.109 14:11:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.109 14:11:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.109 14:11:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.109 14:11:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.109 14:11:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.110 14:11:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.110 14:11:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.110 14:11:06 event -- scripts/common.sh@345 -- # : 1 00:05:42.110 14:11:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.110 14:11:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.110 14:11:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.110 14:11:06 event -- scripts/common.sh@353 -- # local d=1 00:05:42.110 14:11:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.110 14:11:06 event -- scripts/common.sh@355 -- # echo 1 00:05:42.110 14:11:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.110 14:11:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.110 14:11:06 event -- scripts/common.sh@353 -- # local d=2 00:05:42.110 14:11:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.110 14:11:06 event -- scripts/common.sh@355 -- # echo 2 00:05:42.110 14:11:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.110 14:11:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.110 14:11:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.110 14:11:06 event -- scripts/common.sh@368 -- # return 0 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.110 --rc genhtml_branch_coverage=1 00:05:42.110 --rc genhtml_function_coverage=1 00:05:42.110 --rc genhtml_legend=1 00:05:42.110 --rc geninfo_all_blocks=1 00:05:42.110 --rc geninfo_unexecuted_blocks=1 00:05:42.110 00:05:42.110 ' 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.110 --rc genhtml_branch_coverage=1 00:05:42.110 --rc genhtml_function_coverage=1 00:05:42.110 --rc genhtml_legend=1 00:05:42.110 --rc 
geninfo_all_blocks=1 00:05:42.110 --rc geninfo_unexecuted_blocks=1 00:05:42.110 00:05:42.110 ' 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.110 --rc genhtml_branch_coverage=1 00:05:42.110 --rc genhtml_function_coverage=1 00:05:42.110 --rc genhtml_legend=1 00:05:42.110 --rc geninfo_all_blocks=1 00:05:42.110 --rc geninfo_unexecuted_blocks=1 00:05:42.110 00:05:42.110 ' 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.110 --rc genhtml_branch_coverage=1 00:05:42.110 --rc genhtml_function_coverage=1 00:05:42.110 --rc genhtml_legend=1 00:05:42.110 --rc geninfo_all_blocks=1 00:05:42.110 --rc geninfo_unexecuted_blocks=1 00:05:42.110 00:05:42.110 ' 00:05:42.110 14:11:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.110 14:11:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.110 14:11:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:42.110 14:11:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.110 14:11:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 ************************************ 00:05:42.110 START TEST event_perf 00:05:42.110 ************************************ 00:05:42.110 14:11:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.369 Running I/O for 1 seconds...[2024-12-10 14:11:06.964548] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:42.369 [2024-12-10 14:11:06.965285] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:05:42.369 [2024-12-10 14:11:07.112956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.369 [2024-12-10 14:11:07.142879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.369 [2024-12-10 14:11:07.143014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.369 [2024-12-10 14:11:07.143149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.369 [2024-12-10 14:11:07.143153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.746 Running I/O for 1 seconds... 00:05:43.746 lcore 0: 194848 00:05:43.746 lcore 1: 194845 00:05:43.746 lcore 2: 194845 00:05:43.746 lcore 3: 194846 00:05:43.746 done. 
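The four counters printed above are the per-lcore event counts for the 1-second run requested with -t 1; summed, they give the aggregate rate for this event_perf pass. A quick back-of-the-envelope aggregation (a sketch, not part of the test harness):

# Per-lcore counts copied from the event_perf output above (-t 1, i.e. 1 s run).
lcore_events = {0: 194848, 1: 194845, 2: 194845, 3: 194846}
runtime_s = 1.0
total = sum(lcore_events.values())
print(f"total events: {total}, aggregate rate: {total / runtime_s:,.0f} events/sec")
# total events: 779384, aggregate rate: 779,384 events/sec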
00:05:43.746 00:05:43.746 ************************************ 00:05:43.746 END TEST event_perf 00:05:43.746 ************************************ 00:05:43.746 real 0m1.238s 00:05:43.746 user 0m4.073s 00:05:43.746 sys 0m0.046s 00:05:43.746 14:11:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.746 14:11:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.746 14:11:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:43.746 14:11:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:43.746 14:11:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.746 14:11:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.746 ************************************ 00:05:43.746 START TEST event_reactor 00:05:43.746 ************************************ 00:05:43.746 14:11:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:43.746 [2024-12-10 14:11:08.252628] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:05:43.746 [2024-12-10 14:11:08.252717] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59133 ] 00:05:43.746 [2024-12-10 14:11:08.395532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.746 [2024-12-10 14:11:08.426586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.682 test_start 00:05:44.682 oneshot 00:05:44.682 tick 100 00:05:44.682 tick 100 00:05:44.682 tick 250 00:05:44.682 tick 100 00:05:44.682 tick 100 00:05:44.682 tick 100 00:05:44.682 tick 250 00:05:44.682 tick 500 00:05:44.682 tick 100 00:05:44.682 tick 100 00:05:44.682 tick 250 00:05:44.682 tick 100 00:05:44.682 tick 100 00:05:44.682 test_end 00:05:44.682 00:05:44.682 real 0m1.237s 00:05:44.682 user 0m1.095s 00:05:44.682 sys 0m0.037s 00:05:44.682 ************************************ 00:05:44.682 END TEST event_reactor 00:05:44.682 ************************************ 00:05:44.682 14:11:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.682 14:11:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:44.682 14:11:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.682 14:11:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:44.682 14:11:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.682 14:11:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.941 ************************************ 00:05:44.941 START TEST event_reactor_perf 00:05:44.941 ************************************ 00:05:44.941 14:11:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.941 [2024-12-10 14:11:09.543332] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:44.941 [2024-12-10 14:11:09.543422] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:44.941 [2024-12-10 14:11:09.694001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.941 [2024-12-10 14:11:09.722596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.318 test_start 00:05:46.318 test_end 00:05:46.318 Performance: 434405 events per second 00:05:46.318 00:05:46.318 real 0m1.238s 00:05:46.318 user 0m1.096s 00:05:46.318 sys 0m0.035s 00:05:46.318 14:11:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.318 14:11:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.318 ************************************ 00:05:46.318 END TEST event_reactor_perf 00:05:46.318 ************************************ 00:05:46.318 14:11:10 event -- event/event.sh@49 -- # uname -s 00:05:46.318 14:11:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.318 14:11:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.318 14:11:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.318 14:11:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.318 14:11:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.318 ************************************ 00:05:46.318 START TEST event_scheduler 00:05:46.318 ************************************ 00:05:46.318 14:11:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.318 * Looking for test storage... 
00:05:46.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:46.318 14:11:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.318 14:11:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.318 14:11:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.318 14:11:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.318 14:11:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.318 14:11:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.318 14:11:11 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.318 14:11:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.318 --rc genhtml_branch_coverage=1 00:05:46.318 --rc genhtml_function_coverage=1 00:05:46.318 --rc genhtml_legend=1 00:05:46.318 --rc geninfo_all_blocks=1 00:05:46.318 --rc geninfo_unexecuted_blocks=1 00:05:46.318 00:05:46.318 ' 00:05:46.318 14:11:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.318 --rc genhtml_branch_coverage=1 00:05:46.318 --rc genhtml_function_coverage=1 00:05:46.318 --rc genhtml_legend=1 00:05:46.318 --rc geninfo_all_blocks=1 00:05:46.318 --rc geninfo_unexecuted_blocks=1 00:05:46.318 00:05:46.318 ' 00:05:46.318 14:11:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.318 --rc genhtml_branch_coverage=1 00:05:46.318 --rc genhtml_function_coverage=1 00:05:46.318 --rc genhtml_legend=1 00:05:46.318 --rc geninfo_all_blocks=1 00:05:46.318 --rc geninfo_unexecuted_blocks=1 00:05:46.318 00:05:46.318 ' 00:05:46.318 14:11:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.318 --rc genhtml_branch_coverage=1 00:05:46.318 --rc genhtml_function_coverage=1 00:05:46.318 --rc genhtml_legend=1 00:05:46.318 --rc geninfo_all_blocks=1 00:05:46.318 --rc geninfo_unexecuted_blocks=1 00:05:46.319 00:05:46.319 ' 00:05:46.319 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.319 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59238 00:05:46.319 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.319 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59238 00:05:46.319 14:11:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:05:46.319 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.319 14:11:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.319 14:11:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.319 14:11:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.319 14:11:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.319 14:11:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.319 [2024-12-10 14:11:11.057393] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:46.319 [2024-12-10 14:11:11.057832] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:05:46.578 [2024-12-10 14:11:11.196314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.578 [2024-12-10 14:11:11.230982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.578 [2024-12-10 14:11:11.231124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.578 [2024-12-10 14:11:11.231224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.578 [2024-12-10 14:11:11.231225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:46.578 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.578 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.578 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.578 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.578 POWER: Cannot set governor of lcore 0 to performance 00:05:46.578 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.578 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.578 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.578 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.578 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:46.578 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:46.578 POWER: Unable to set Power Management Environment for lcore 0 00:05:46.578 [2024-12-10 14:11:11.334432] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:46.578 [2024-12-10 14:11:11.334445] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:46.578 [2024-12-10 14:11:11.334454] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:46.578 [2024-12-10 14:11:11.334496] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:46.578 [2024-12-10 14:11:11.334504] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:46.578 [2024-12-10 14:11:11.334510] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.578 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.578 [2024-12-10 14:11:11.371163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.578 [2024-12-10 14:11:11.389795] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.578 14:11:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.578 14:11:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.578 ************************************ 00:05:46.578 START TEST scheduler_create_thread 00:05:46.578 ************************************ 00:05:46.578 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:46.578 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:46.578 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.578 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 2 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 3 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 4 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 5 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 6 00:05:46.837 
14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 7 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 8 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 9 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 10 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.837 14:11:11 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.837 14:11:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.213 14:11:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.213 14:11:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.213 14:11:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.213 14:11:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.213 14:11:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.589 ************************************ 00:05:49.589 END TEST scheduler_create_thread 00:05:49.589 ************************************ 00:05:49.589 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.589 00:05:49.589 real 0m2.612s 00:05:49.589 user 0m0.018s 00:05:49.589 sys 0m0.007s 00:05:49.589 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.589 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.589 14:11:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.589 14:11:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59238 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59238 ']' 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59238 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59238 00:05:49.589 killing process with pid 59238 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59238' 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59238 00:05:49.589 14:11:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59238 00:05:49.848 [2024-12-10 14:11:14.493277] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
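The scheduler_create_thread trace above boils down to a short RPC sequence driven through scripts/rpc.py with the test's scheduler_plugin. A minimal sketch of that sequence follows; it assumes the scheduler test app is already running and reachable on the default RPC socket, and that the scheduler_plugin module is resolvable by rpc.py's --plugin option (neither detail is printed in the log).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc framework_set_scheduler dynamic                 # switch to the dynamic scheduler
$rpc framework_start_init                            # finish subsystem initialization
for m in 0x1 0x2 0x4 0x8; do                         # four busy threads, one pinned to each core
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$m" -a 100
done
for m in 0x1 0x2 0x4 0x8; do                         # four idle threads with the same pinning
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$m" -a 0
done
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # raise its activity to 50%
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"          # create and immediately delete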
00:05:49.848 00:05:49.848 real 0m3.826s 00:05:49.848 user 0m5.826s 00:05:49.848 sys 0m0.290s 00:05:49.848 14:11:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.848 ************************************ 00:05:49.848 END TEST event_scheduler 00:05:49.848 ************************************ 00:05:49.848 14:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.848 14:11:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.106 14:11:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.106 14:11:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.106 14:11:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.106 14:11:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.106 ************************************ 00:05:50.106 START TEST app_repeat 00:05:50.106 ************************************ 00:05:50.106 14:11:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:50.106 14:11:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.106 14:11:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.106 14:11:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.107 Process app_repeat pid: 59324 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59324 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59324' 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.107 spdk_app_start Round 0 00:05:50.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.107 14:11:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59324 /var/tmp/spdk-nbd.sock 00:05:50.107 14:11:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59324 ']' 00:05:50.107 14:11:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.107 14:11:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.107 14:11:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.107 14:11:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.107 14:11:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.107 [2024-12-10 14:11:14.726679] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:05:50.107 [2024-12-10 14:11:14.726941] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59324 ] 00:05:50.107 [2024-12-10 14:11:14.867545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.107 [2024-12-10 14:11:14.919510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.107 [2024-12-10 14:11:14.919527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.366 [2024-12-10 14:11:14.956288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.366 14:11:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.366 14:11:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.366 14:11:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.624 Malloc0 00:05:50.624 14:11:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.882 Malloc1 00:05:50.882 14:11:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.882 14:11:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.142 /dev/nbd0 00:05:51.142 14:11:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.142 14:11:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.142 14:11:15 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.142 1+0 records in 00:05:51.142 1+0 records out 00:05:51.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280222 s, 14.6 MB/s 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.142 14:11:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.142 14:11:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.142 14:11:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.142 14:11:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.406 /dev/nbd1 00:05:51.406 14:11:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.406 14:11:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.406 14:11:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.406 14:11:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.406 14:11:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.406 14:11:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.406 14:11:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.407 1+0 records in 00:05:51.407 1+0 records out 00:05:51.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272276 s, 15.0 MB/s 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.407 14:11:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.407 14:11:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.407 14:11:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.407 14:11:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:51.407 14:11:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.407 14:11:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.666 { 00:05:51.666 "nbd_device": "/dev/nbd0", 00:05:51.666 "bdev_name": "Malloc0" 00:05:51.666 }, 00:05:51.666 { 00:05:51.666 "nbd_device": "/dev/nbd1", 00:05:51.666 "bdev_name": "Malloc1" 00:05:51.666 } 00:05:51.666 ]' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.666 { 00:05:51.666 "nbd_device": "/dev/nbd0", 00:05:51.666 "bdev_name": "Malloc0" 00:05:51.666 }, 00:05:51.666 { 00:05:51.666 "nbd_device": "/dev/nbd1", 00:05:51.666 "bdev_name": "Malloc1" 00:05:51.666 } 00:05:51.666 ]' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.666 /dev/nbd1' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.666 /dev/nbd1' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.666 14:11:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.925 256+0 records in 00:05:51.925 256+0 records out 00:05:51.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00793865 s, 132 MB/s 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.925 256+0 records in 00:05:51.925 256+0 records out 00:05:51.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243851 s, 43.0 MB/s 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.925 256+0 records in 00:05:51.925 256+0 records out 00:05:51.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232341 s, 45.1 MB/s 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.925 14:11:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.926 14:11:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.184 14:11:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.444 14:11:17 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.444 14:11:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.703 14:11:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.703 14:11:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.271 14:11:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.271 [2024-12-10 14:11:17.900400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.271 [2024-12-10 14:11:17.932174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.271 [2024-12-10 14:11:17.932189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.271 [2024-12-10 14:11:17.962134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.271 [2024-12-10 14:11:17.962251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.271 [2024-12-10 14:11:17.962264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.568 spdk_app_start Round 1 00:05:56.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.568 14:11:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.568 14:11:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.568 14:11:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59324 /var/tmp/spdk-nbd.sock 00:05:56.568 14:11:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59324 ']' 00:05:56.568 14:11:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.568 14:11:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.568 14:11:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
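Each app_repeat round above follows the same shape: launch the app, create two malloc bdevs over its RPC socket, export them as NBD block devices, run the data check, then tear everything down before the next round. A condensed sketch of the setup half, restating only commands visible in the xtrace (the waitforlisten polling loop is elided):
sock=/var/tmp/spdk-nbd.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
# ...wait until the app is up and listening on $sock...
$rpc bdev_malloc_create 64 4096          # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
$rpc bdev_malloc_create 64 4096          # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0    # expose each bdev through the kernel NBD driver
$rpc nbd_start_disk Malloc1 /dev/nbd1
$rpc nbd_get_disks                       # should report exactly two /dev/nbd* entries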
00:05:56.568 14:11:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.568 14:11:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.568 14:11:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.568 14:11:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.568 14:11:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.827 Malloc0 00:05:56.827 14:11:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.086 Malloc1 00:05:57.086 14:11:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.086 14:11:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.086 14:11:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.086 14:11:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.086 14:11:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.086 14:11:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.086 14:11:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.087 14:11:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.087 /dev/nbd0 00:05:57.346 14:11:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.346 14:11:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.346 1+0 records in 00:05:57.346 1+0 records out 
00:05:57.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247845 s, 16.5 MB/s 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.346 14:11:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.346 14:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.346 14:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.346 14:11:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.606 /dev/nbd1 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.606 1+0 records in 00:05:57.606 1+0 records out 00:05:57.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394411 s, 10.4 MB/s 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.606 14:11:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.606 14:11:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.865 { 00:05:57.865 "nbd_device": "/dev/nbd0", 00:05:57.865 "bdev_name": "Malloc0" 00:05:57.865 }, 00:05:57.865 { 00:05:57.865 "nbd_device": "/dev/nbd1", 00:05:57.865 "bdev_name": "Malloc1" 00:05:57.865 } 
00:05:57.865 ]' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.865 { 00:05:57.865 "nbd_device": "/dev/nbd0", 00:05:57.865 "bdev_name": "Malloc0" 00:05:57.865 }, 00:05:57.865 { 00:05:57.865 "nbd_device": "/dev/nbd1", 00:05:57.865 "bdev_name": "Malloc1" 00:05:57.865 } 00:05:57.865 ]' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.865 /dev/nbd1' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.865 /dev/nbd1' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.865 256+0 records in 00:05:57.865 256+0 records out 00:05:57.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105807 s, 99.1 MB/s 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.865 14:11:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.124 256+0 records in 00:05:58.124 256+0 records out 00:05:58.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252209 s, 41.6 MB/s 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.124 256+0 records in 00:05:58.124 256+0 records out 00:05:58.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239895 s, 43.7 MB/s 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.124 14:11:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.124 14:11:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.383 14:11:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.642 14:11:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.901 14:11:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.901 14:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.901 14:11:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.164 14:11:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.164 14:11:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.424 14:11:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.424 [2024-12-10 14:11:24.142783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.424 [2024-12-10 14:11:24.173123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.424 [2024-12-10 14:11:24.173134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.424 [2024-12-10 14:11:24.202146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.424 [2024-12-10 14:11:24.202264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.424 [2024-12-10 14:11:24.202276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.713 14:11:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.713 spdk_app_start Round 2 00:06:02.713 14:11:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.713 14:11:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59324 /var/tmp/spdk-nbd.sock 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59324 ']' 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
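The verification step repeated in every round is ordinary dd and cmp against the exported devices, using the temp file path printed in the trace; a sketch:
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write the pattern through each NBD device
done
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$tmp" "$nbd"                               # read back and compare byte for byte
done
rm "$tmp"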
00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.713 14:11:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.713 14:11:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.984 Malloc0 00:06:02.984 14:11:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.243 Malloc1 00:06:03.243 14:11:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.243 14:11:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.503 /dev/nbd0 00:06:03.503 14:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.503 14:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.503 1+0 records in 00:06:03.503 1+0 records out 
00:06:03.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331562 s, 12.4 MB/s 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.503 14:11:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:03.503 14:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.503 14:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.503 14:11:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.762 /dev/nbd1 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.762 1+0 records in 00:06:03.762 1+0 records out 00:06:03.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324185 s, 12.6 MB/s 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.762 14:11:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.762 14:11:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.022 { 00:06:04.022 "nbd_device": "/dev/nbd0", 00:06:04.022 "bdev_name": "Malloc0" 00:06:04.022 }, 00:06:04.022 { 00:06:04.022 "nbd_device": "/dev/nbd1", 00:06:04.022 "bdev_name": "Malloc1" 00:06:04.022 } 
00:06:04.022 ]' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.022 { 00:06:04.022 "nbd_device": "/dev/nbd0", 00:06:04.022 "bdev_name": "Malloc0" 00:06:04.022 }, 00:06:04.022 { 00:06:04.022 "nbd_device": "/dev/nbd1", 00:06:04.022 "bdev_name": "Malloc1" 00:06:04.022 } 00:06:04.022 ]' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.022 /dev/nbd1' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.022 /dev/nbd1' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.022 14:11:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.281 256+0 records in 00:06:04.281 256+0 records out 00:06:04.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0072404 s, 145 MB/s 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.281 256+0 records in 00:06:04.281 256+0 records out 00:06:04.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219598 s, 47.7 MB/s 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.281 256+0 records in 00:06:04.281 256+0 records out 00:06:04.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245005 s, 42.8 MB/s 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.281 14:11:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.540 14:11:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.799 14:11:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.058 14:11:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.059 14:11:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.059 14:11:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.627 14:11:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.627 [2024-12-10 14:11:30.295549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.627 [2024-12-10 14:11:30.329069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.627 [2024-12-10 14:11:30.329080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.627 [2024-12-10 14:11:30.357607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.627 [2024-12-10 14:11:30.357714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.627 [2024-12-10 14:11:30.357726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.967 14:11:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59324 /var/tmp/spdk-nbd.sock 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59324 ']' 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:08.967 14:11:33 event.app_repeat -- event/event.sh@39 -- # killprocess 59324 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59324 ']' 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59324 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59324 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.967 killing process with pid 59324 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59324' 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59324 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59324 00:06:08.967 spdk_app_start is called in Round 0. 00:06:08.967 Shutdown signal received, stop current app iteration 00:06:08.967 Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 reinitialization... 00:06:08.967 spdk_app_start is called in Round 1. 00:06:08.967 Shutdown signal received, stop current app iteration 00:06:08.967 Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 reinitialization... 00:06:08.967 spdk_app_start is called in Round 2. 00:06:08.967 Shutdown signal received, stop current app iteration 00:06:08.967 Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 reinitialization... 00:06:08.967 spdk_app_start is called in Round 3. 00:06:08.967 Shutdown signal received, stop current app iteration 00:06:08.967 14:11:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.967 14:11:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.967 00:06:08.967 real 0m18.937s 00:06:08.967 user 0m43.656s 00:06:08.967 sys 0m2.660s 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.967 14:11:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.967 ************************************ 00:06:08.967 END TEST app_repeat 00:06:08.967 ************************************ 00:06:08.967 14:11:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.967 14:11:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.967 14:11:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.967 14:11:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.967 14:11:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.967 ************************************ 00:06:08.967 START TEST cpu_locks 00:06:08.967 ************************************ 00:06:08.967 14:11:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.967 * Looking for test storage... 
00:06:08.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:08.967 14:11:33 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.967 14:11:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.967 14:11:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.227 14:11:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.227 --rc genhtml_branch_coverage=1 00:06:09.227 --rc genhtml_function_coverage=1 00:06:09.227 --rc genhtml_legend=1 00:06:09.227 --rc geninfo_all_blocks=1 00:06:09.227 --rc geninfo_unexecuted_blocks=1 00:06:09.227 00:06:09.227 ' 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.227 --rc genhtml_branch_coverage=1 00:06:09.227 --rc genhtml_function_coverage=1 
00:06:09.227 --rc genhtml_legend=1 00:06:09.227 --rc geninfo_all_blocks=1 00:06:09.227 --rc geninfo_unexecuted_blocks=1 00:06:09.227 00:06:09.227 ' 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.227 --rc genhtml_branch_coverage=1 00:06:09.227 --rc genhtml_function_coverage=1 00:06:09.227 --rc genhtml_legend=1 00:06:09.227 --rc geninfo_all_blocks=1 00:06:09.227 --rc geninfo_unexecuted_blocks=1 00:06:09.227 00:06:09.227 ' 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.227 --rc genhtml_branch_coverage=1 00:06:09.227 --rc genhtml_function_coverage=1 00:06:09.227 --rc genhtml_legend=1 00:06:09.227 --rc geninfo_all_blocks=1 00:06:09.227 --rc geninfo_unexecuted_blocks=1 00:06:09.227 00:06:09.227 ' 00:06:09.227 14:11:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.227 14:11:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.227 14:11:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.227 14:11:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.227 14:11:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.227 ************************************ 00:06:09.227 START TEST default_locks 00:06:09.227 ************************************ 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59763 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59763 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59763 ']' 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.227 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.228 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.228 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.228 14:11:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.228 [2024-12-10 14:11:33.907679] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:09.228 [2024-12-10 14:11:33.907764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59763 ] 00:06:09.228 [2024-12-10 14:11:34.045272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.487 [2024-12-10 14:11:34.077194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.487 [2024-12-10 14:11:34.115367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.487 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.487 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:09.487 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59763 00:06:09.487 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59763 00:06:09.487 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.746 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59763 00:06:09.746 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59763 ']' 00:06:09.746 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59763 00:06:09.746 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59763 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.005 killing process with pid 59763 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59763' 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59763 00:06:10.005 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59763 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59763 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59763 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59763 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59763 ']' 00:06:10.265 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.266 
14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59763) - No such process 00:06:10.266 ERROR: process (pid: 59763) is no longer running 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.266 00:06:10.266 real 0m1.006s 00:06:10.266 user 0m1.086s 00:06:10.266 sys 0m0.367s 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.266 14:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.266 ************************************ 00:06:10.266 END TEST default_locks 00:06:10.266 ************************************ 00:06:10.266 14:11:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.266 14:11:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.266 14:11:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.266 14:11:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.266 ************************************ 00:06:10.266 START TEST default_locks_via_rpc 00:06:10.266 ************************************ 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59802 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59802 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59802 ']' 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.266 14:11:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.266 [2024-12-10 14:11:34.970468] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:10.266 [2024-12-10 14:11:34.970578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:06:10.524 [2024-12-10 14:11:35.111657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.524 [2024-12-10 14:11:35.141892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.524 [2024-12-10 14:11:35.182365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.524 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.524 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.524 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59802 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59802 00:06:10.525 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59802 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59802 ']' 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59802 00:06:11.092 14:11:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59802 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.092 killing process with pid 59802 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59802' 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59802 00:06:11.092 14:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59802 00:06:11.351 00:06:11.351 real 0m1.181s 00:06:11.351 user 0m1.254s 00:06:11.351 sys 0m0.429s 00:06:11.351 14:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.351 14:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.351 ************************************ 00:06:11.351 END TEST default_locks_via_rpc 00:06:11.351 ************************************ 00:06:11.351 14:11:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.351 14:11:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.351 14:11:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.351 14:11:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.351 ************************************ 00:06:11.351 START TEST non_locking_app_on_locked_coremask 00:06:11.351 ************************************ 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59840 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59840 /var/tmp/spdk.sock 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59840 ']' 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.351 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.609 [2024-12-10 14:11:36.210147] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:11.609 [2024-12-10 14:11:36.210258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:06:11.609 [2024-12-10 14:11:36.350082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.609 [2024-12-10 14:11:36.382916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.609 [2024-12-10 14:11:36.425801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59854 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59854 /var/tmp/spdk2.sock 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59854 ']' 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.869 14:11:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.869 [2024-12-10 14:11:36.609492] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:11.869 [2024-12-10 14:11:36.609600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:06:12.128 [2024-12-10 14:11:36.761808] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.128 [2024-12-10 14:11:36.761865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.128 [2024-12-10 14:11:36.823159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.128 [2024-12-10 14:11:36.900213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.065 14:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.065 14:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.065 14:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59840 00:06:13.065 14:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59840 00:06:13.065 14:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59840 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59840 ']' 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59840 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59840 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.001 killing process with pid 59840 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59840' 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59840 00:06:14.001 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59840 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59854 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59854 ']' 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59854 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59854 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.260 killing process with pid 59854 00:06:14.260 14:11:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59854' 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59854 00:06:14.260 14:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59854 00:06:14.519 00:06:14.519 real 0m3.054s 00:06:14.519 user 0m3.612s 00:06:14.519 sys 0m0.865s 00:06:14.519 14:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.519 14:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.519 ************************************ 00:06:14.519 END TEST non_locking_app_on_locked_coremask 00:06:14.519 ************************************ 00:06:14.519 14:11:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.519 14:11:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.519 14:11:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.519 14:11:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.519 ************************************ 00:06:14.519 START TEST locking_app_on_unlocked_coremask 00:06:14.519 ************************************ 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59911 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59911 /var/tmp/spdk.sock 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59911 ']' 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.519 14:11:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.519 [2024-12-10 14:11:39.329192] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:14.519 [2024-12-10 14:11:39.329302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59911 ] 00:06:14.778 [2024-12-10 14:11:39.471364] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.778 [2024-12-10 14:11:39.471414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.778 [2024-12-10 14:11:39.500884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.778 [2024-12-10 14:11:39.537433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59927 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59927 /var/tmp/spdk2.sock 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59927 ']' 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.714 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.714 [2024-12-10 14:11:40.340221] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:15.714 [2024-12-10 14:11:40.340332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59927 ] 00:06:15.714 [2024-12-10 14:11:40.495581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.973 [2024-12-10 14:11:40.557560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.973 [2024-12-10 14:11:40.630208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.232 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.232 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.232 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59927 00:06:16.232 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59927 00:06:16.232 14:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59911 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59911 ']' 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59911 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59911 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.799 killing process with pid 59911 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59911' 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59911 00:06:16.799 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59911 00:06:17.058 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59927 00:06:17.058 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59927 ']' 00:06:17.058 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59927 00:06:17.058 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59927 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.317 killing process with pid 59927 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59927' 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59927 00:06:17.317 14:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59927 00:06:17.576 00:06:17.576 real 0m2.896s 00:06:17.576 user 0m3.348s 00:06:17.576 sys 0m0.754s 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.576 END TEST locking_app_on_unlocked_coremask 00:06:17.576 ************************************ 00:06:17.576 14:11:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.576 14:11:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.576 14:11:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.576 14:11:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.576 START TEST locking_app_on_locked_coremask 00:06:17.576 ************************************ 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59981 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59981 /var/tmp/spdk.sock 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59981 ']' 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.576 14:11:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 [2024-12-10 14:11:42.257082] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:17.576 [2024-12-10 14:11:42.257184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:06:17.576 [2024-12-10 14:11:42.395541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.835 [2024-12-10 14:11:42.425691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.835 [2024-12-10 14:11:42.462472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59997 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59997 /var/tmp/spdk2.sock 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59997 /var/tmp/spdk2.sock 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59997 /var/tmp/spdk2.sock 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59997 ']' 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.403 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.403 [2024-12-10 14:11:43.233288] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:18.403 [2024-12-10 14:11:43.233407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59997 ] 00:06:18.661 [2024-12-10 14:11:43.389438] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59981 has claimed it. 00:06:18.661 [2024-12-10 14:11:43.389518] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.228 ERROR: process (pid: 59997) is no longer running 00:06:19.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59997) - No such process 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59981 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.228 14:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59981 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59981 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59981 ']' 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59981 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59981 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.487 killing process with pid 59981 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59981' 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59981 00:06:19.487 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59981 00:06:19.746 00:06:19.746 real 0m2.278s 00:06:19.746 user 0m2.739s 00:06:19.746 sys 0m0.454s 00:06:19.746 14:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.746 14:11:44 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:19.746 ************************************ 00:06:19.746 END TEST locking_app_on_locked_coremask 00:06:19.746 ************************************ 00:06:19.746 14:11:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.746 14:11:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.746 14:11:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.746 14:11:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.746 ************************************ 00:06:19.746 START TEST locking_overlapped_coremask 00:06:19.746 ************************************ 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60043 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60043 /var/tmp/spdk.sock 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60043 ']' 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.746 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.005 [2024-12-10 14:11:44.591251] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:20.005 [2024-12-10 14:11:44.591319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60043 ] 00:06:20.005 [2024-12-10 14:11:44.731189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.005 [2024-12-10 14:11:44.763775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.005 [2024-12-10 14:11:44.763894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.005 [2024-12-10 14:11:44.763898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.005 [2024-12-10 14:11:44.801247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60053 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60053 /var/tmp/spdk2.sock 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60053 /var/tmp/spdk2.sock 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60053 /var/tmp/spdk2.sock 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60053 ']' 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.264 14:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 [2024-12-10 14:11:45.004191] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:20.264 [2024-12-10 14:11:45.004823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60053 ] 00:06:20.523 [2024-12-10 14:11:45.164135] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60043 has claimed it. 00:06:20.523 [2024-12-10 14:11:45.164189] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60053) - No such process 00:06:21.091 ERROR: process (pid: 60053) is no longer running 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60043 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60043 ']' 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60043 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60043 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60043' 00:06:21.091 killing process with pid 60043 00:06:21.091 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60043 00:06:21.091 14:11:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60043 00:06:21.350 00:06:21.350 real 0m1.451s 00:06:21.350 user 0m4.067s 00:06:21.350 sys 0m0.300s 00:06:21.350 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.350 14:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.350 ************************************ 00:06:21.350 END TEST locking_overlapped_coremask 00:06:21.350 ************************************ 00:06:21.350 14:11:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.350 14:11:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.350 14:11:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.350 14:11:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.350 ************************************ 00:06:21.350 START TEST locking_overlapped_coremask_via_rpc 00:06:21.350 ************************************ 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60093 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60093 /var/tmp/spdk.sock 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60093 ']' 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.350 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.350 [2024-12-10 14:11:46.106454] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:21.350 [2024-12-10 14:11:46.106595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60093 ] 00:06:21.618 [2024-12-10 14:11:46.252013] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.618 [2024-12-10 14:11:46.252061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.618 [2024-12-10 14:11:46.282663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.618 [2024-12-10 14:11:46.282803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.619 [2024-12-10 14:11:46.282807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.619 [2024-12-10 14:11:46.323916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60098 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60098 /var/tmp/spdk2.sock 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60098 ']' 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.619 14:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.914 [2024-12-10 14:11:46.512614] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:21.914 [2024-12-10 14:11:46.512706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60098 ] 00:06:21.914 [2024-12-10 14:11:46.675300] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.914 [2024-12-10 14:11:46.679027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.914 [2024-12-10 14:11:46.743833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.914 [2024-12-10 14:11:46.743911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.914 [2024-12-10 14:11:46.743914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.177 [2024-12-10 14:11:46.827465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 [2024-12-10 14:11:47.526134] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60093 has claimed it. 
00:06:22.745 request: 00:06:22.745 { 00:06:22.745 "method": "framework_enable_cpumask_locks", 00:06:22.745 "req_id": 1 00:06:22.745 } 00:06:22.745 Got JSON-RPC error response 00:06:22.745 response: 00:06:22.745 { 00:06:22.745 "code": -32603, 00:06:22.745 "message": "Failed to claim CPU core: 2" 00:06:22.745 } 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60093 /var/tmp/spdk.sock 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60093 ']' 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.745 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60098 /var/tmp/spdk2.sock 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60098 ']' 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.004 14:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.262 00:06:23.262 real 0m2.040s 00:06:23.262 user 0m1.214s 00:06:23.262 sys 0m0.166s 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.262 14:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.262 ************************************ 00:06:23.262 END TEST locking_overlapped_coremask_via_rpc 00:06:23.262 ************************************ 00:06:23.521 14:11:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.521 14:11:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60093 ]] 00:06:23.521 14:11:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60093 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60093 ']' 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60093 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60093 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.521 killing process with pid 60093 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60093' 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60093 00:06:23.521 14:11:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60093 00:06:23.781 14:11:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60098 ]] 00:06:23.781 14:11:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60098 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60098 ']' 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60098 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.781 
14:11:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60098 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60098' 00:06:23.781 killing process with pid 60098 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60098 00:06:23.781 14:11:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60098 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60093 ]] 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60093 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60093 ']' 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60093 00:06:24.040 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60093) - No such process 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60093 is not found' 00:06:24.040 Process with pid 60093 is not found 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60098 ]] 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60098 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60098 ']' 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60098 00:06:24.040 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60098) - No such process 00:06:24.040 Process with pid 60098 is not found 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60098 is not found' 00:06:24.040 14:11:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.040 00:06:24.040 real 0m14.982s 00:06:24.040 user 0m27.574s 00:06:24.040 sys 0m4.007s 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.040 14:11:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 ************************************ 00:06:24.040 END TEST cpu_locks 00:06:24.040 ************************************ 00:06:24.040 ************************************ 00:06:24.040 END TEST event 00:06:24.040 ************************************ 00:06:24.040 00:06:24.040 real 0m41.972s 00:06:24.040 user 1m23.535s 00:06:24.040 sys 0m7.345s 00:06:24.040 14:11:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.040 14:11:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 14:11:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.040 14:11:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.040 14:11:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.040 14:11:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 ************************************ 00:06:24.040 START TEST thread 00:06:24.040 ************************************ 00:06:24.040 14:11:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.040 * Looking for test storage... 
00:06:24.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.040 14:11:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.040 14:11:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.041 14:11:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.300 14:11:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.300 14:11:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.300 14:11:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.300 14:11:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.300 14:11:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.300 14:11:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.300 14:11:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.300 14:11:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.300 14:11:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.300 14:11:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.300 14:11:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.300 14:11:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.300 14:11:48 thread -- scripts/common.sh@345 -- # : 1 00:06:24.300 14:11:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.300 14:11:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.300 14:11:48 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.300 14:11:48 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.300 14:11:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.300 14:11:48 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.300 14:11:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.300 14:11:48 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.300 14:11:48 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.300 14:11:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.300 14:11:48 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.300 14:11:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.300 14:11:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.300 14:11:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.300 14:11:48 thread -- scripts/common.sh@368 -- # return 0 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.300 --rc genhtml_branch_coverage=1 00:06:24.300 --rc genhtml_function_coverage=1 00:06:24.300 --rc genhtml_legend=1 00:06:24.300 --rc geninfo_all_blocks=1 00:06:24.300 --rc geninfo_unexecuted_blocks=1 00:06:24.300 00:06:24.300 ' 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.300 --rc genhtml_branch_coverage=1 00:06:24.300 --rc genhtml_function_coverage=1 00:06:24.300 --rc genhtml_legend=1 00:06:24.300 --rc geninfo_all_blocks=1 00:06:24.300 --rc geninfo_unexecuted_blocks=1 00:06:24.300 00:06:24.300 ' 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:24.300 --rc genhtml_branch_coverage=1 00:06:24.300 --rc genhtml_function_coverage=1 00:06:24.300 --rc genhtml_legend=1 00:06:24.300 --rc geninfo_all_blocks=1 00:06:24.300 --rc geninfo_unexecuted_blocks=1 00:06:24.300 00:06:24.300 ' 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.300 --rc genhtml_branch_coverage=1 00:06:24.300 --rc genhtml_function_coverage=1 00:06:24.300 --rc genhtml_legend=1 00:06:24.300 --rc geninfo_all_blocks=1 00:06:24.300 --rc geninfo_unexecuted_blocks=1 00:06:24.300 00:06:24.300 ' 00:06:24.300 14:11:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.300 14:11:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.300 ************************************ 00:06:24.300 START TEST thread_poller_perf 00:06:24.300 ************************************ 00:06:24.300 14:11:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.300 [2024-12-10 14:11:48.981730] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:24.300 [2024-12-10 14:11:48.981814] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:06:24.300 [2024-12-10 14:11:49.123021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.559 [2024-12-10 14:11:49.150991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.559 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:25.496 [2024-12-10T14:11:50.333Z] ====================================== 00:06:25.496 [2024-12-10T14:11:50.333Z] busy:2205941880 (cyc) 00:06:25.496 [2024-12-10T14:11:50.333Z] total_run_count: 374000 00:06:25.496 [2024-12-10T14:11:50.333Z] tsc_hz: 2200000000 (cyc) 00:06:25.496 [2024-12-10T14:11:50.333Z] ====================================== 00:06:25.496 [2024-12-10T14:11:50.333Z] poller_cost: 5898 (cyc), 2680 (nsec) 00:06:25.496 00:06:25.496 real 0m1.227s 00:06:25.496 user 0m1.089s 00:06:25.496 sys 0m0.032s 00:06:25.496 14:11:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.496 ************************************ 00:06:25.496 END TEST thread_poller_perf 00:06:25.496 ************************************ 00:06:25.496 14:11:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.496 14:11:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.496 14:11:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.496 14:11:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.496 14:11:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.496 ************************************ 00:06:25.496 START TEST thread_poller_perf 00:06:25.496 ************************************ 00:06:25.496 14:11:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.496 [2024-12-10 14:11:50.262562] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:25.496 [2024-12-10 14:11:50.262667] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60264 ] 00:06:25.756 [2024-12-10 14:11:50.406828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.756 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:25.756 [2024-12-10 14:11:50.433348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.692 [2024-12-10T14:11:51.529Z] ====================================== 00:06:26.692 [2024-12-10T14:11:51.529Z] busy:2201884826 (cyc) 00:06:26.692 [2024-12-10T14:11:51.529Z] total_run_count: 4529000 00:06:26.692 [2024-12-10T14:11:51.529Z] tsc_hz: 2200000000 (cyc) 00:06:26.692 [2024-12-10T14:11:51.529Z] ====================================== 00:06:26.692 [2024-12-10T14:11:51.529Z] poller_cost: 486 (cyc), 220 (nsec) 00:06:26.692 00:06:26.692 real 0m1.228s 00:06:26.692 user 0m1.092s 00:06:26.692 sys 0m0.031s 00:06:26.692 14:11:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.692 ************************************ 00:06:26.692 END TEST thread_poller_perf 00:06:26.692 ************************************ 00:06:26.692 14:11:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.692 14:11:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.692 00:06:26.692 real 0m2.753s 00:06:26.692 user 0m2.345s 00:06:26.692 sys 0m0.199s 00:06:26.692 14:11:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.692 14:11:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.692 ************************************ 00:06:26.692 END TEST thread 00:06:26.692 ************************************ 00:06:26.951 14:11:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.951 14:11:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.951 14:11:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.951 14:11:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.951 14:11:51 -- common/autotest_common.sh@10 -- # set +x 00:06:26.951 ************************************ 00:06:26.951 START TEST app_cmdline 00:06:26.951 ************************************ 00:06:26.951 14:11:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.951 * Looking for test storage... 
00:06:26.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.951 14:11:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.951 14:11:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.951 14:11:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.951 14:11:51 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.951 14:11:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.952 14:11:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.952 --rc genhtml_branch_coverage=1 00:06:26.952 --rc genhtml_function_coverage=1 00:06:26.952 --rc genhtml_legend=1 00:06:26.952 --rc geninfo_all_blocks=1 00:06:26.952 --rc geninfo_unexecuted_blocks=1 00:06:26.952 00:06:26.952 ' 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.952 --rc genhtml_branch_coverage=1 00:06:26.952 --rc genhtml_function_coverage=1 00:06:26.952 --rc genhtml_legend=1 00:06:26.952 --rc geninfo_all_blocks=1 00:06:26.952 --rc geninfo_unexecuted_blocks=1 00:06:26.952 00:06:26.952 ' 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.952 --rc genhtml_branch_coverage=1 00:06:26.952 --rc genhtml_function_coverage=1 00:06:26.952 --rc genhtml_legend=1 00:06:26.952 --rc geninfo_all_blocks=1 00:06:26.952 --rc geninfo_unexecuted_blocks=1 00:06:26.952 00:06:26.952 ' 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.952 --rc genhtml_branch_coverage=1 00:06:26.952 --rc genhtml_function_coverage=1 00:06:26.952 --rc genhtml_legend=1 00:06:26.952 --rc geninfo_all_blocks=1 00:06:26.952 --rc geninfo_unexecuted_blocks=1 00:06:26.952 00:06:26.952 ' 00:06:26.952 14:11:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.952 14:11:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60347 00:06:26.952 14:11:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.952 14:11:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60347 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60347 ']' 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.952 14:11:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.210 [2024-12-10 14:11:51.809943] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:27.210 [2024-12-10 14:11:51.810078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60347 ] 00:06:27.210 [2024-12-10 14:11:51.955749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.210 [2024-12-10 14:11:51.984149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.210 [2024-12-10 14:11:52.020519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.469 14:11:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.469 14:11:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:27.469 14:11:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:27.728 { 00:06:27.728 "version": "SPDK v25.01-pre git sha1 e576aacaf", 00:06:27.728 "fields": { 00:06:27.728 "major": 25, 00:06:27.728 "minor": 1, 00:06:27.728 "patch": 0, 00:06:27.728 "suffix": "-pre", 00:06:27.729 "commit": "e576aacaf" 00:06:27.729 } 00:06:27.729 } 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.729 14:11:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:27.729 14:11:52 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.988 request: 00:06:27.988 { 00:06:27.988 "method": "env_dpdk_get_mem_stats", 00:06:27.988 "req_id": 1 00:06:27.988 } 00:06:27.988 Got JSON-RPC error response 00:06:27.988 response: 00:06:27.988 { 00:06:27.988 "code": -32601, 00:06:27.988 "message": "Method not found" 00:06:27.988 } 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.988 14:11:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60347 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60347 ']' 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60347 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60347 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.988 killing process with pid 60347 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60347' 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 60347 00:06:27.988 14:11:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 60347 00:06:28.247 00:06:28.247 real 0m1.404s 00:06:28.247 user 0m1.869s 00:06:28.247 sys 0m0.337s 00:06:28.247 14:11:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.247 14:11:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 ************************************ 00:06:28.247 END TEST app_cmdline 00:06:28.247 ************************************ 00:06:28.247 14:11:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.247 14:11:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.247 14:11:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.247 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 ************************************ 00:06:28.247 START TEST version 00:06:28.247 ************************************ 00:06:28.247 14:11:53 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.507 * Looking for test storage... 
00:06:28.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.507 14:11:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.507 14:11:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.507 14:11:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.507 14:11:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.507 14:11:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.507 14:11:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.507 14:11:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.507 14:11:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.507 14:11:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.507 14:11:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.507 14:11:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.507 14:11:53 version -- scripts/common.sh@344 -- # case "$op" in 00:06:28.507 14:11:53 version -- scripts/common.sh@345 -- # : 1 00:06:28.507 14:11:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.507 14:11:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.507 14:11:53 version -- scripts/common.sh@365 -- # decimal 1 00:06:28.507 14:11:53 version -- scripts/common.sh@353 -- # local d=1 00:06:28.507 14:11:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.507 14:11:53 version -- scripts/common.sh@355 -- # echo 1 00:06:28.507 14:11:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.507 14:11:53 version -- scripts/common.sh@366 -- # decimal 2 00:06:28.507 14:11:53 version -- scripts/common.sh@353 -- # local d=2 00:06:28.507 14:11:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.507 14:11:53 version -- scripts/common.sh@355 -- # echo 2 00:06:28.507 14:11:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.507 14:11:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.507 14:11:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.507 14:11:53 version -- scripts/common.sh@368 -- # return 0 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.507 --rc genhtml_branch_coverage=1 00:06:28.507 --rc genhtml_function_coverage=1 00:06:28.507 --rc genhtml_legend=1 00:06:28.507 --rc geninfo_all_blocks=1 00:06:28.507 --rc geninfo_unexecuted_blocks=1 00:06:28.507 00:06:28.507 ' 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.507 --rc genhtml_branch_coverage=1 00:06:28.507 --rc genhtml_function_coverage=1 00:06:28.507 --rc genhtml_legend=1 00:06:28.507 --rc geninfo_all_blocks=1 00:06:28.507 --rc geninfo_unexecuted_blocks=1 00:06:28.507 00:06:28.507 ' 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.507 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:28.507 --rc genhtml_branch_coverage=1 00:06:28.507 --rc genhtml_function_coverage=1 00:06:28.507 --rc genhtml_legend=1 00:06:28.507 --rc geninfo_all_blocks=1 00:06:28.507 --rc geninfo_unexecuted_blocks=1 00:06:28.507 00:06:28.507 ' 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.507 --rc genhtml_branch_coverage=1 00:06:28.507 --rc genhtml_function_coverage=1 00:06:28.507 --rc genhtml_legend=1 00:06:28.507 --rc geninfo_all_blocks=1 00:06:28.507 --rc geninfo_unexecuted_blocks=1 00:06:28.507 00:06:28.507 ' 00:06:28.507 14:11:53 version -- app/version.sh@17 -- # get_header_version major 00:06:28.507 14:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # cut -f2 00:06:28.507 14:11:53 version -- app/version.sh@17 -- # major=25 00:06:28.507 14:11:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.507 14:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # cut -f2 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.507 14:11:53 version -- app/version.sh@18 -- # minor=1 00:06:28.507 14:11:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:28.507 14:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # cut -f2 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.507 14:11:53 version -- app/version.sh@19 -- # patch=0 00:06:28.507 14:11:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.507 14:11:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # cut -f2 00:06:28.507 14:11:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.507 14:11:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.507 14:11:53 version -- app/version.sh@22 -- # version=25.1 00:06:28.507 14:11:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.507 14:11:53 version -- app/version.sh@28 -- # version=25.1rc0 00:06:28.507 14:11:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.507 14:11:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.507 14:11:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.507 14:11:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.507 00:06:28.507 real 0m0.273s 00:06:28.507 user 0m0.186s 00:06:28.507 sys 0m0.120s 00:06:28.507 14:11:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.507 14:11:53 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.507 ************************************ 00:06:28.507 END TEST version 00:06:28.507 ************************************ 00:06:28.507 14:11:53 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.507 14:11:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:28.507 14:11:53 -- spdk/autotest.sh@194 -- # uname -s 00:06:28.507 14:11:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:28.507 14:11:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.507 14:11:53 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:28.507 14:11:53 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:28.507 14:11:53 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:28.507 14:11:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.507 14:11:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.507 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.767 ************************************ 00:06:28.767 START TEST spdk_dd 00:06:28.767 ************************************ 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:28.767 * Looking for test storage... 00:06:28.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.767 --rc genhtml_branch_coverage=1 00:06:28.767 --rc genhtml_function_coverage=1 00:06:28.767 --rc genhtml_legend=1 00:06:28.767 --rc geninfo_all_blocks=1 00:06:28.767 --rc geninfo_unexecuted_blocks=1 00:06:28.767 00:06:28.767 ' 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.767 --rc genhtml_branch_coverage=1 00:06:28.767 --rc genhtml_function_coverage=1 00:06:28.767 --rc genhtml_legend=1 00:06:28.767 --rc geninfo_all_blocks=1 00:06:28.767 --rc geninfo_unexecuted_blocks=1 00:06:28.767 00:06:28.767 ' 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.767 --rc genhtml_branch_coverage=1 00:06:28.767 --rc genhtml_function_coverage=1 00:06:28.767 --rc genhtml_legend=1 00:06:28.767 --rc geninfo_all_blocks=1 00:06:28.767 --rc geninfo_unexecuted_blocks=1 00:06:28.767 00:06:28.767 ' 00:06:28.767 14:11:53 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.767 --rc genhtml_branch_coverage=1 00:06:28.767 --rc genhtml_function_coverage=1 00:06:28.767 --rc genhtml_legend=1 00:06:28.767 --rc geninfo_all_blocks=1 00:06:28.767 --rc geninfo_unexecuted_blocks=1 00:06:28.767 00:06:28.767 ' 00:06:28.767 14:11:53 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.767 14:11:53 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.767 14:11:53 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.767 14:11:53 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.767 14:11:53 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.767 14:11:53 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:28.767 14:11:53 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.767 14:11:53 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:29.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.287 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:29.287 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:29.287 14:11:53 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:29.287 14:11:53 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:29.287 14:11:53 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:29.287 14:11:53 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:29.287 14:11:53 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
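The run of dd/common.sh entries above and below is check_liburing: it pipes objdump -p over the spdk_dd binary, keeps only the NEEDED lines, and marks liburing as in use if any listed soname matches liburing.so.*. A minimal sketch of that logic, paraphrased from the trace rather than copied from dd/common.sh (the real function keeps reading every NEEDED entry instead of stopping at the first hit):

    # Simplified restatement of the check_liburing trace: scan the dynamic
    # dependencies of spdk_dd and record whether liburing is among them.
    liburing_in_use=0
    while read -r _ lib _; do                      # "NEEDED <soname>" -> lib=<soname>
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1
            printf '* spdk_dd linked to liburing\n'
            break                                  # sketch only; the traced loop keeps reading
        fi
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)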
00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:29.287 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:29.288 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:29.289 * spdk_dd linked to liburing 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:29.289 14:11:53 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:29.289 14:11:53 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:29.289 14:11:53 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:29.289 14:11:54 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:29.289 14:11:54 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:29.289 14:11:54 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:29.289 14:11:54 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:29.289 14:11:54 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:29.289 14:11:54 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:29.289 14:11:54 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:29.289 14:11:54 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:29.289 14:11:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:29.289 14:11:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.289 14:11:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:29.289 ************************************ 00:06:29.289 START TEST spdk_dd_basic_rw 00:06:29.289 ************************************ 00:06:29.289 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:29.289 * Looking for test storage... 00:06:29.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:29.289 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.289 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.289 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.549 14:11:54 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
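The entries that follow are dd/common.sh's get_native_nvme_bs for 0000:00:10.0: spdk_nvme_identify dumps the controller data, a bash regex pulls out the current LBA format index (#04), and the data size of that format (4096) is echoed back as the native block size that basic_rw.sh derives its block sizes from. A condensed sketch of that flow, paraphrased from the trace (the real helper captures the identify output with mapfile into an array rather than a scalar):

    # Condensed restatement of the get_native_nvme_bs trace below.
    pci=0000:00:10.0
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    if [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]]; then
        lbaf=${BASH_REMATCH[1]}                              # "04" in this run
        if [[ $id =~ LBA\ Format\ #${lbaf}:\ Data\ Size:\ *([0-9]+) ]]; then
            native_bs=${BASH_REMATCH[1]}                     # 4096 in this run
        fi
    fi
    echo "$native_bs"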
00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:29.550 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:29.811 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:29.811 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.812 ************************************ 00:06:29.812 START TEST dd_bs_lt_native_bs 00:06:29.812 ************************************ 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.812 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:29.812 { 00:06:29.812 "subsystems": [ 00:06:29.812 { 00:06:29.812 "subsystem": "bdev", 00:06:29.812 "config": [ 00:06:29.812 { 00:06:29.812 "params": { 00:06:29.812 "trtype": "pcie", 00:06:29.812 "traddr": "0000:00:10.0", 00:06:29.812 "name": "Nvme0" 00:06:29.812 }, 00:06:29.812 "method": "bdev_nvme_attach_controller" 00:06:29.812 }, 00:06:29.812 { 00:06:29.812 "method": "bdev_wait_for_examine" 00:06:29.812 } 00:06:29.812 ] 00:06:29.812 } 00:06:29.812 ] 00:06:29.812 } 00:06:29.812 [2024-12-10 14:11:54.540549] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:29.812 [2024-12-10 14:11:54.541049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60685 ] 00:06:30.071 [2024-12-10 14:11:54.691259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.071 [2024-12-10 14:11:54.733457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.071 [2024-12-10 14:11:54.768658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.071 [2024-12-10 14:11:54.864751] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:30.071 [2024-12-10 14:11:54.864823] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.330 [2024-12-10 14:11:54.944479] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:30.330 14:11:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.330 00:06:30.330 real 0m0.519s 00:06:30.330 user 0m0.351s 00:06:30.330 sys 0m0.123s 00:06:30.330 ************************************ 00:06:30.330 END TEST dd_bs_lt_native_bs 00:06:30.330 ************************************ 00:06:30.330 
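The dd_bs_lt_native_bs case that ends above is a negative test: the Identify data earlier in the log shows the active LBA format (#04) with a 4096-byte data size, so dd/common.sh derives native_bs=4096 and spdk_dd must refuse a 2048-byte --bs with "--bs value cannot be less than input (1) neither output (4096) native block size". The NOT wrapper from autotest_common.sh expects that failure, which is why the non-zero exit status (es=234, normalized down to 1) counts as a pass. A minimal stand-alone sketch of the same check, assuming a hypothetical config file bdev.json equivalent to the JSON echoed by gen_conf (the captured run feeds both the input data and the config through /dev/fd instead):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# --bs=2048 is below the 4096-byte native block size of Nvme0n1, so the copy must fail.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=2048 --count=1 --json bdev.json; then
  echo "unexpected: --bs below the native block size was accepted" >&2
  exit 1
fi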
14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 ************************************ 00:06:30.330 START TEST dd_rw 00:06:30.330 ************************************ 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:30.330 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:30.895 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:30.895 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.895 14:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 [2024-12-10 14:11:55.589578] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:30.895 [2024-12-10 14:11:55.590078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:06:30.895 { 00:06:30.895 "subsystems": [ 00:06:30.895 { 00:06:30.895 "subsystem": "bdev", 00:06:30.895 "config": [ 00:06:30.895 { 00:06:30.895 "params": { 00:06:30.895 "trtype": "pcie", 00:06:30.895 "traddr": "0000:00:10.0", 00:06:30.895 "name": "Nvme0" 00:06:30.895 }, 00:06:30.895 "method": "bdev_nvme_attach_controller" 00:06:30.895 }, 00:06:30.895 { 00:06:30.895 "method": "bdev_wait_for_examine" 00:06:30.895 } 00:06:30.895 ] 00:06:30.895 } 00:06:30.895 ] 00:06:30.895 } 00:06:31.154 [2024-12-10 14:11:55.735851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.154 [2024-12-10 14:11:55.767857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.154 [2024-12-10 14:11:55.799693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.154  [2024-12-10T14:11:56.251Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:31.414 00:06:31.414 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:31.414 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:31.414 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.414 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.414 [2024-12-10 14:11:56.065176] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:31.414 [2024-12-10 14:11:56.065267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:06:31.414 { 00:06:31.414 "subsystems": [ 00:06:31.414 { 00:06:31.414 "subsystem": "bdev", 00:06:31.414 "config": [ 00:06:31.414 { 00:06:31.414 "params": { 00:06:31.414 "trtype": "pcie", 00:06:31.414 "traddr": "0000:00:10.0", 00:06:31.414 "name": "Nvme0" 00:06:31.414 }, 00:06:31.414 "method": "bdev_nvme_attach_controller" 00:06:31.414 }, 00:06:31.414 { 00:06:31.414 "method": "bdev_wait_for_examine" 00:06:31.414 } 00:06:31.414 ] 00:06:31.414 } 00:06:31.414 ] 00:06:31.414 } 00:06:31.414 [2024-12-10 14:11:56.209796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.414 [2024-12-10 14:11:56.236442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.673 [2024-12-10 14:11:56.263982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.673  [2024-12-10T14:11:56.510Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:31.673 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.673 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.932 [2024-12-10 14:11:56.530468] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:31.932 [2024-12-10 14:11:56.530910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60745 ] 00:06:31.932 { 00:06:31.932 "subsystems": [ 00:06:31.932 { 00:06:31.932 "subsystem": "bdev", 00:06:31.932 "config": [ 00:06:31.932 { 00:06:31.932 "params": { 00:06:31.932 "trtype": "pcie", 00:06:31.932 "traddr": "0000:00:10.0", 00:06:31.932 "name": "Nvme0" 00:06:31.932 }, 00:06:31.932 "method": "bdev_nvme_attach_controller" 00:06:31.932 }, 00:06:31.932 { 00:06:31.932 "method": "bdev_wait_for_examine" 00:06:31.932 } 00:06:31.932 ] 00:06:31.932 } 00:06:31.932 ] 00:06:31.932 } 00:06:31.932 [2024-12-10 14:11:56.675025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.932 [2024-12-10 14:11:56.708889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.932 [2024-12-10 14:11:56.737768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.192  [2024-12-10T14:11:57.029Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:32.192 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:32.192 14:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.759 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:32.759 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:32.759 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.759 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.759 [2024-12-10 14:11:57.500515] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
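Every dd_rw pass in this section repeats the four-step cycle that the invocations above show for the first (bs=4096, qd=1) combination: write test/dd/dd.dump0 to the Nvme0n1 bdev at the given block size and queue depth, read the same number of blocks back into test/dd/dd.dump1, verify the round trip with diff -q, then zero the first MiB of the bdev (clear_nvme) before the next combination. A minimal sketch of one iteration, assuming the hypothetical bdev.json from the earlier sketch and a pre-generated dd.dump0 of count*bs bytes:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
bs=4096 qd=1 count=15                                                             # 15 * 4096 = 61440 bytes per pass
$DD --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json bdev.json                 # write the pattern file to the bdev
$DD --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json bdev.json  # read the same range back
diff -q dd.dump0 dd.dump1                                                         # round trip must be byte-identical
$DD --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json           # clear_nvme between passes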
00:06:32.759 [2024-12-10 14:11:57.500626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 00:06:32.759 { 00:06:32.759 "subsystems": [ 00:06:32.759 { 00:06:32.759 "subsystem": "bdev", 00:06:32.759 "config": [ 00:06:32.759 { 00:06:32.759 "params": { 00:06:32.759 "trtype": "pcie", 00:06:32.759 "traddr": "0000:00:10.0", 00:06:32.759 "name": "Nvme0" 00:06:32.759 }, 00:06:32.759 "method": "bdev_nvme_attach_controller" 00:06:32.759 }, 00:06:32.759 { 00:06:32.759 "method": "bdev_wait_for_examine" 00:06:32.759 } 00:06:32.759 ] 00:06:32.759 } 00:06:32.759 ] 00:06:32.759 } 00:06:33.018 [2024-12-10 14:11:57.646357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.018 [2024-12-10 14:11:57.674046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.018 [2024-12-10 14:11:57.702713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.018  [2024-12-10T14:11:58.114Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:33.277 00:06:33.277 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:33.277 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:33.277 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.277 14:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.277 [2024-12-10 14:11:57.975333] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:33.277 [2024-12-10 14:11:57.975426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60778 ] 00:06:33.277 { 00:06:33.277 "subsystems": [ 00:06:33.277 { 00:06:33.277 "subsystem": "bdev", 00:06:33.277 "config": [ 00:06:33.277 { 00:06:33.277 "params": { 00:06:33.277 "trtype": "pcie", 00:06:33.277 "traddr": "0000:00:10.0", 00:06:33.277 "name": "Nvme0" 00:06:33.277 }, 00:06:33.277 "method": "bdev_nvme_attach_controller" 00:06:33.277 }, 00:06:33.277 { 00:06:33.277 "method": "bdev_wait_for_examine" 00:06:33.277 } 00:06:33.277 ] 00:06:33.277 } 00:06:33.277 ] 00:06:33.277 } 00:06:33.536 [2024-12-10 14:11:58.120111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.536 [2024-12-10 14:11:58.147449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.536 [2024-12-10 14:11:58.174157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.536  [2024-12-10T14:11:58.632Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:33.795 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.795 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.795 [2024-12-10 14:11:58.438764] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:33.795 [2024-12-10 14:11:58.439467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60792 ] 00:06:33.795 { 00:06:33.795 "subsystems": [ 00:06:33.795 { 00:06:33.795 "subsystem": "bdev", 00:06:33.795 "config": [ 00:06:33.795 { 00:06:33.795 "params": { 00:06:33.795 "trtype": "pcie", 00:06:33.795 "traddr": "0000:00:10.0", 00:06:33.796 "name": "Nvme0" 00:06:33.796 }, 00:06:33.796 "method": "bdev_nvme_attach_controller" 00:06:33.796 }, 00:06:33.796 { 00:06:33.796 "method": "bdev_wait_for_examine" 00:06:33.796 } 00:06:33.796 ] 00:06:33.796 } 00:06:33.796 ] 00:06:33.796 } 00:06:33.796 [2024-12-10 14:11:58.583357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.796 [2024-12-10 14:11:58.611216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.055 [2024-12-10 14:11:58.639195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.055  [2024-12-10T14:11:58.892Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:34.055 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:34.055 14:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:34.623 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:34.623 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.623 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 [2024-12-10 14:11:59.393844] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
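The sweep that continues here covers three block sizes and two queue depths: basic_rw.sh builds bss by left-shifting the 4096-byte native block size and pairs each entry with qds=(1 64), while count shrinks as bs grows so each pass moves a comparable amount of data. The figures below just restate the arithmetic visible in the log:

echo $(( 4096 << 0 )) $(( 4096 << 1 )) $(( 4096 << 2 ))   # 4096 8192 16384 (bss)
echo $(( 15 * 4096 )) $(( 7 * 8192 )) $(( 3 * 16384 ))    # 61440 57344 49152 bytes per pass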
00:06:34.623 [2024-12-10 14:11:59.393927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60807 ] 00:06:34.623 { 00:06:34.623 "subsystems": [ 00:06:34.623 { 00:06:34.623 "subsystem": "bdev", 00:06:34.623 "config": [ 00:06:34.623 { 00:06:34.623 "params": { 00:06:34.623 "trtype": "pcie", 00:06:34.623 "traddr": "0000:00:10.0", 00:06:34.623 "name": "Nvme0" 00:06:34.623 }, 00:06:34.623 "method": "bdev_nvme_attach_controller" 00:06:34.623 }, 00:06:34.623 { 00:06:34.623 "method": "bdev_wait_for_examine" 00:06:34.623 } 00:06:34.623 ] 00:06:34.623 } 00:06:34.623 ] 00:06:34.623 } 00:06:34.882 [2024-12-10 14:11:59.538578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.882 [2024-12-10 14:11:59.565531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.882 [2024-12-10 14:11:59.592288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.882  [2024-12-10T14:11:59.978Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:35.141 00:06:35.141 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:35.141 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:35.141 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.141 14:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.141 [2024-12-10 14:11:59.852766] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:35.141 [2024-12-10 14:11:59.853043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60820 ] 00:06:35.142 { 00:06:35.142 "subsystems": [ 00:06:35.142 { 00:06:35.142 "subsystem": "bdev", 00:06:35.142 "config": [ 00:06:35.142 { 00:06:35.142 "params": { 00:06:35.142 "trtype": "pcie", 00:06:35.142 "traddr": "0000:00:10.0", 00:06:35.142 "name": "Nvme0" 00:06:35.142 }, 00:06:35.142 "method": "bdev_nvme_attach_controller" 00:06:35.142 }, 00:06:35.142 { 00:06:35.142 "method": "bdev_wait_for_examine" 00:06:35.142 } 00:06:35.142 ] 00:06:35.142 } 00:06:35.142 ] 00:06:35.142 } 00:06:35.401 [2024-12-10 14:12:00.000606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.401 [2024-12-10 14:12:00.033358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.401 [2024-12-10 14:12:00.063336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.401  [2024-12-10T14:12:00.497Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:35.660 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.660 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.660 [2024-12-10 14:12:00.333930] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:35.660 [2024-12-10 14:12:00.334033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60836 ] 00:06:35.660 { 00:06:35.660 "subsystems": [ 00:06:35.660 { 00:06:35.660 "subsystem": "bdev", 00:06:35.660 "config": [ 00:06:35.660 { 00:06:35.660 "params": { 00:06:35.660 "trtype": "pcie", 00:06:35.660 "traddr": "0000:00:10.0", 00:06:35.660 "name": "Nvme0" 00:06:35.660 }, 00:06:35.660 "method": "bdev_nvme_attach_controller" 00:06:35.660 }, 00:06:35.660 { 00:06:35.660 "method": "bdev_wait_for_examine" 00:06:35.660 } 00:06:35.660 ] 00:06:35.660 } 00:06:35.660 ] 00:06:35.660 } 00:06:35.661 [2024-12-10 14:12:00.482387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.919 [2024-12-10 14:12:00.514274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.919 [2024-12-10 14:12:00.541660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.919  [2024-12-10T14:12:00.756Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:35.919 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:36.177 14:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.436 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:36.436 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:36.436 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.436 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.695 [2024-12-10 14:12:01.288778] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:36.695 [2024-12-10 14:12:01.288873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60855 ] 00:06:36.695 { 00:06:36.695 "subsystems": [ 00:06:36.695 { 00:06:36.695 "subsystem": "bdev", 00:06:36.695 "config": [ 00:06:36.695 { 00:06:36.695 "params": { 00:06:36.695 "trtype": "pcie", 00:06:36.695 "traddr": "0000:00:10.0", 00:06:36.695 "name": "Nvme0" 00:06:36.695 }, 00:06:36.695 "method": "bdev_nvme_attach_controller" 00:06:36.695 }, 00:06:36.695 { 00:06:36.695 "method": "bdev_wait_for_examine" 00:06:36.695 } 00:06:36.695 ] 00:06:36.695 } 00:06:36.695 ] 00:06:36.695 } 00:06:36.695 [2024-12-10 14:12:01.435145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.695 [2024-12-10 14:12:01.463083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.695 [2024-12-10 14:12:01.490568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.954  [2024-12-10T14:12:01.791Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:36.954 00:06:36.954 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:36.954 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:36.954 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.954 14:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.954 [2024-12-10 14:12:01.751558] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:36.954 [2024-12-10 14:12:01.751675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60868 ] 00:06:36.954 { 00:06:36.954 "subsystems": [ 00:06:36.954 { 00:06:36.954 "subsystem": "bdev", 00:06:36.954 "config": [ 00:06:36.954 { 00:06:36.954 "params": { 00:06:36.954 "trtype": "pcie", 00:06:36.954 "traddr": "0000:00:10.0", 00:06:36.954 "name": "Nvme0" 00:06:36.954 }, 00:06:36.954 "method": "bdev_nvme_attach_controller" 00:06:36.954 }, 00:06:36.954 { 00:06:36.954 "method": "bdev_wait_for_examine" 00:06:36.954 } 00:06:36.954 ] 00:06:36.954 } 00:06:36.954 ] 00:06:36.954 } 00:06:37.213 [2024-12-10 14:12:01.893575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.213 [2024-12-10 14:12:01.926633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.213 [2024-12-10 14:12:01.954506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.213  [2024-12-10T14:12:02.309Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:37.472 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.472 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.472 { 00:06:37.472 "subsystems": [ 00:06:37.472 { 00:06:37.472 "subsystem": "bdev", 00:06:37.472 "config": [ 00:06:37.472 { 00:06:37.472 "params": { 00:06:37.472 "trtype": "pcie", 00:06:37.472 "traddr": "0000:00:10.0", 00:06:37.472 "name": "Nvme0" 00:06:37.472 }, 00:06:37.472 "method": "bdev_nvme_attach_controller" 00:06:37.472 }, 00:06:37.472 { 00:06:37.472 "method": "bdev_wait_for_examine" 00:06:37.472 } 00:06:37.472 ] 00:06:37.472 } 00:06:37.472 ] 00:06:37.472 } 00:06:37.472 [2024-12-10 14:12:02.233280] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:37.472 [2024-12-10 14:12:02.233378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60884 ] 00:06:37.730 [2024-12-10 14:12:02.385279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.730 [2024-12-10 14:12:02.423694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.730 [2024-12-10 14:12:02.456784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.730  [2024-12-10T14:12:02.827Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:37.990 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:37.990 14:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.248 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:38.248 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:38.248 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.248 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.507 [2024-12-10 14:12:03.121573] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:38.507 [2024-12-10 14:12:03.121654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60903 ] 00:06:38.507 { 00:06:38.507 "subsystems": [ 00:06:38.507 { 00:06:38.507 "subsystem": "bdev", 00:06:38.507 "config": [ 00:06:38.507 { 00:06:38.507 "params": { 00:06:38.507 "trtype": "pcie", 00:06:38.507 "traddr": "0000:00:10.0", 00:06:38.507 "name": "Nvme0" 00:06:38.507 }, 00:06:38.507 "method": "bdev_nvme_attach_controller" 00:06:38.507 }, 00:06:38.507 { 00:06:38.507 "method": "bdev_wait_for_examine" 00:06:38.507 } 00:06:38.507 ] 00:06:38.507 } 00:06:38.507 ] 00:06:38.507 } 00:06:38.507 [2024-12-10 14:12:03.273873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.507 [2024-12-10 14:12:03.312015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.766 [2024-12-10 14:12:03.344978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.766  [2024-12-10T14:12:03.603Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:38.766 00:06:38.766 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:38.766 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.766 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.766 14:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.025 [2024-12-10 14:12:03.617136] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:39.025 [2024-12-10 14:12:03.617211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:06:39.025 { 00:06:39.025 "subsystems": [ 00:06:39.025 { 00:06:39.025 "subsystem": "bdev", 00:06:39.025 "config": [ 00:06:39.025 { 00:06:39.025 "params": { 00:06:39.025 "trtype": "pcie", 00:06:39.025 "traddr": "0000:00:10.0", 00:06:39.025 "name": "Nvme0" 00:06:39.025 }, 00:06:39.025 "method": "bdev_nvme_attach_controller" 00:06:39.025 }, 00:06:39.025 { 00:06:39.025 "method": "bdev_wait_for_examine" 00:06:39.025 } 00:06:39.025 ] 00:06:39.025 } 00:06:39.025 ] 00:06:39.025 } 00:06:39.025 [2024-12-10 14:12:03.759413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.025 [2024-12-10 14:12:03.786242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.025 [2024-12-10 14:12:03.812828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.284  [2024-12-10T14:12:04.121Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:39.284 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.284 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.284 [2024-12-10 14:12:04.094094] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:39.284 [2024-12-10 14:12:04.094730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60932 ] 00:06:39.284 { 00:06:39.284 "subsystems": [ 00:06:39.285 { 00:06:39.285 "subsystem": "bdev", 00:06:39.285 "config": [ 00:06:39.285 { 00:06:39.285 "params": { 00:06:39.285 "trtype": "pcie", 00:06:39.285 "traddr": "0000:00:10.0", 00:06:39.285 "name": "Nvme0" 00:06:39.285 }, 00:06:39.285 "method": "bdev_nvme_attach_controller" 00:06:39.285 }, 00:06:39.285 { 00:06:39.285 "method": "bdev_wait_for_examine" 00:06:39.285 } 00:06:39.285 ] 00:06:39.285 } 00:06:39.285 ] 00:06:39.285 } 00:06:39.544 [2024-12-10 14:12:04.239336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.544 [2024-12-10 14:12:04.266121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.544 [2024-12-10 14:12:04.292611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.803  [2024-12-10T14:12:04.640Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:39.803 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:39.803 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.374 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:40.374 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.374 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.374 14:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.374 [2024-12-10 14:12:04.958096] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:40.374 [2024-12-10 14:12:04.958629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60945 ] 00:06:40.374 { 00:06:40.374 "subsystems": [ 00:06:40.374 { 00:06:40.374 "subsystem": "bdev", 00:06:40.374 "config": [ 00:06:40.374 { 00:06:40.374 "params": { 00:06:40.374 "trtype": "pcie", 00:06:40.374 "traddr": "0000:00:10.0", 00:06:40.374 "name": "Nvme0" 00:06:40.374 }, 00:06:40.374 "method": "bdev_nvme_attach_controller" 00:06:40.374 }, 00:06:40.374 { 00:06:40.374 "method": "bdev_wait_for_examine" 00:06:40.374 } 00:06:40.374 ] 00:06:40.374 } 00:06:40.374 ] 00:06:40.374 } 00:06:40.374 [2024-12-10 14:12:05.103125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.374 [2024-12-10 14:12:05.129751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.374 [2024-12-10 14:12:05.158192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.634  [2024-12-10T14:12:05.471Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:40.634 00:06:40.634 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:40.634 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:40.634 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.634 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.634 [2024-12-10 14:12:05.423883] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:40.634 [2024-12-10 14:12:05.423991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60959 ] 00:06:40.634 { 00:06:40.634 "subsystems": [ 00:06:40.634 { 00:06:40.634 "subsystem": "bdev", 00:06:40.634 "config": [ 00:06:40.634 { 00:06:40.634 "params": { 00:06:40.634 "trtype": "pcie", 00:06:40.634 "traddr": "0000:00:10.0", 00:06:40.634 "name": "Nvme0" 00:06:40.634 }, 00:06:40.634 "method": "bdev_nvme_attach_controller" 00:06:40.634 }, 00:06:40.634 { 00:06:40.634 "method": "bdev_wait_for_examine" 00:06:40.634 } 00:06:40.634 ] 00:06:40.634 } 00:06:40.634 ] 00:06:40.634 } 00:06:40.892 [2024-12-10 14:12:05.566934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.893 [2024-12-10 14:12:05.593526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.893 [2024-12-10 14:12:05.620713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.893  [2024-12-10T14:12:05.991Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:41.154 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.154 14:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 [2024-12-10 14:12:05.886337] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:41.154 { 00:06:41.154 "subsystems": [ 00:06:41.154 { 00:06:41.154 "subsystem": "bdev", 00:06:41.154 "config": [ 00:06:41.154 { 00:06:41.154 "params": { 00:06:41.154 "trtype": "pcie", 00:06:41.154 "traddr": "0000:00:10.0", 00:06:41.154 "name": "Nvme0" 00:06:41.154 }, 00:06:41.154 "method": "bdev_nvme_attach_controller" 00:06:41.154 }, 00:06:41.154 { 00:06:41.154 "method": "bdev_wait_for_examine" 00:06:41.154 } 00:06:41.154 ] 00:06:41.154 } 00:06:41.154 ] 00:06:41.154 } 00:06:41.154 [2024-12-10 14:12:05.886470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60974 ] 00:06:41.437 [2024-12-10 14:12:06.032245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.437 [2024-12-10 14:12:06.059510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.437 [2024-12-10 14:12:06.085997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.437  [2024-12-10T14:12:06.545Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:41.708 00:06:41.708 00:06:41.708 real 0m11.251s 00:06:41.708 user 0m8.361s 00:06:41.708 sys 0m3.478s 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.708 ************************************ 00:06:41.708 END TEST dd_rw 00:06:41.708 ************************************ 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.708 ************************************ 00:06:41.708 START TEST dd_rw_offset 00:06:41.708 ************************************ 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:41.708 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:41.709 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=dqbvrhrbwzxzsw8ol3aj4myyl4frghoh1j3ptku2v7adcqzz02s1w9s33hcbjyzc2n4vkm2ij68ce1bq0or4x3onnpd8vg9b95p31kick2geb4mzans2f54iwpihwmpnuuqe06xtq6qnelz1cfm6juezrw9d0uoklobwkhogoio8dwas1mpxxpypxuunj2rpll6eu2ijxm5bekwxx6z16nkshjayzkdnqjf1zmc09iztwxzhz8bhd5fwgf1p82sbpu9j1hfz4y4pf1tuk87r8vaazxkfffds08a8p46hvqleh3fj4os0213p8h04a626ke27pbhhp07b1e965blnklqa35gl5x2o3ipabnhsyv140l13a5a2zi2fn87gyqo64fglt7e4ag56whsueeluuxqxi4esa1dh9byjgadraxk10r8a8m5ipipm7b4da8e8zt5xiineb07l6bmx9b9vph0djsk4jqvtlcdrdxmttihpy7dcqfeg6v3urhmnys8ig7x5kmpamdz9aoirxo4rwit07560n693kvjwf2l2vkv4lklzqg1r7v5pcce3dxayn53ow8rmc5hr3b1idbjv040bam981ihbfhndji1b28140xcvpztm2151mjzsv51wqwe4rgmjbu654z1syoa8fe05vdd81rcnusli1tzd37999ck9cph8x72k30s6agpw3fkke0ffftay600e5chg71zfkko0i2o5xnfuux0enp5jax5tlzufakzzcze6rxsfqgcp1cfuxahq75aw1bfpleqarwc2nya4odt530evbdk7cef65l7ebiw47p5h08pepd0jot58paweng5p3uke0mxqqtu5nprkaa22ykt0cci7sx6thp29flsyrkjnzel52lz038fp6sc1i9ot47i8btaiibtjy8510lzqwrk91451mj08r50cq1uemdyql6jzza44t0i2t0wv47oi72wma069n86euk9sfi8ghrv1v37x0ax6e5g6k6xy76rwl6j32qo61f0vdb645sl2z3g3dp725xcpvrenkj4pzbflz3ifwasekvt25bjif6jtb2ibqbjrk0nm7i7sth09m72ici3ye9q894pu4tuc06avy22t952exf0vlx247e0yqqpal4qrud5h972okwq0judpa60nak3ypogkxy7rb32xttufx1ki6ewv5dm92bljgsf49crthiszpondny881l0yktfqfb7qzhj0yzr245qvbgsmv5recfq762bwh32rv0bnecv9vw07wl7ajv2qtjtmc075efh2h0rdbtpkf1n54p2dolj0mgg9lm8ak4g5cvpgz0td1znpojf39z7ucf5lkygvfmvd0ppsmma475u4awbmsjxzx4nfb71fg6q12qgssu20c9pidnxxuffb7l95ygv07dn4flygqh11rdmw53mlkr56mp12fh0oy1c3vhg0yyniomxbv1ez2m085g37ul7yrrmbypbwxnhd0ts7fuph46ksb90ktpn1udzw7azjsgfl6fhtp3mtuvz45yl2wuq96ezhyi2fj3hrj7gdfp0zikjlygiz8gtkod970e70yzv50n5zlor4bnx7u3cf0s60au0ok166nqhmn2bmqo7glg9uh3r6s9qlpw9ia45vwzi7ibezaa61rydby9aiqevyvx6katrpyjtnpk1i9vlevb4ihtn2shbivlkck5o0xbc1qpwlb9m1zaxu0op0dkbcm0ysaxwwu68r296w511h6h1musb9xxl7fgei99me5h0c3d97jpv5o470exe6pja8i9wdlklux4svmmflss2dqxb3uvzjust79tkh5ct9fq4pl6o2qafsrklv9dmg443br3vnqo0jmt914x14ydz7c9u9kluyozctuy60tkz8cgjtf2zxdxd8qbujzc0pgkv5mf0q5hw48fehfz1zxszz01jbxao9alleg7s0gq2browcxl4ah5cetkkmfb7l7h5s3wmfahd1p4jkflfoimqt6vvjeijpmfyt64kcntpfdrgpsftmt1rlqmiifb29d58ffzyvcf3kpkv4yzxcyqlu9xc4s274oaff23iisvatrp8lkazhyd6385r5itpbn2whf9mujm36cdzc3sig883pc0v9y07imfcjtbxm9rdrkygkqac9cos0feehvwvt8xltsdh21utmpt990dkmltbxthv4iq06iinc6zs616epi3ao84ezh8e76w960y3r10kdo1rws26mbj1q5ricar2ldtmvtvzssxpm5eoy6d1dhvy9vgo5lhcej26q0abd7n2leptofyvlhf1nk7r2ae3bm0c0mzkd4yu09ffxhzun2kbraqez2dqli027isgoqg5xy55kxelqamqozsa20s89hjos8xg11dbo7uzvtyspmq92gc85zobeytwfrxgr77phju4n5ebmilgn1mwet6rrlshuu0y8xdad2fzb1yi5texifbnjn8vo1urnaj5vbz8pa21y7kl6nmdyfroluye2t8k66xy10rgl8ti7z6jtvobo6fem2nt4hxtvnho25r9p37pnjn6z48tla68wl4ql8w3ezwybclqx7rb3at64h93l3tku8n4nfkmij3hdrzayrdlu64vtp7q2rdz48054khwkehathxaelgp2f17pdh99jaem3g675psowau43ep2yj313dt11ex0gnuvvln0tv6cj5yzfkceo4889sn0o3ejy032rmtoov5xv8e24gu9dn0uvczf000kflkfwte3o88egeaf1jeji4qpozadx327nrj6akpv6s2psgp9jr322g0jj1nj1x76u07xvmptzvcbugbqvttu738u1sd1n1tg088jexqt18dhhqah5idwblovfedcumkz8l2xo55ox056qvmi3k113tm5on2q6oka6ngq8m0kty1srjyoywcc6yma75gccmuaklq58pkdb2x5maea0rhmzysvn72gs6pkfu33yh07rkn4e3jq7ixfhwtaqginmr2t195cz6qsxgulzij761m15gu1fheezh60nsj1w3pidr82c9712ygbhepmc419iv5mpukasroubiot6wyb82cb4rwvtv6qunicpgii3uxq6by8y4j4glhxrc2xvn33i32cg52wp0k82315u3udwjwl99q23kuv4tt8q42sd8ncb7yqoqp0abtvvdmaj6gxpsn6y1ckgmpobbu1pnwuyb3a81iz54bsvumtgkusylx98y3v7cjqkstb97g38qe0mmr1f5isnytsbt441rih7agazv7crhexse4zck78muyqw2qi6yr2frk0s6q4s88zgq8opg2vg1wa2wku6ftlo4ipwsvbv06x3h0uxfnqio8d0e2660e204vc1a3zi6y3qo1bq4rpcvlj1w5x7poey87x1oq4qjzw655z0pbwq022xs3sh4f6d9yfjrn58e6wghwucetbl7gsg3ffnrfknmv2452xcch5k9xh3diygsr9tjjq16yfxrvu4w3h9s
bxbovy00va99auvkjg1n7mtf228yi6xljqki9qmfkg6n681dw0pizijth1pa9v2ic320s2lngu077i1arp2ium0715buuw2pt26cpbpc87qoo4hgddbgsczphqn9ndn9bzl387moeyjwnfijctwwlf1qbxt8ar3ds7sbx5kjwzmwsdyrwjbk1564v2ykhz4qunjv8lnqzopz11ztkhr6d7uhityuqgojhialhu7yytx81me23ltfq117h5m7ss1rl2ki4ty0s2jh4yu3luswrieo1uee6cb7bsftwz7ex4jvoo1afyc8r207ykor7jxpxpbccfjfnsj8hpmhu60yswuxrpsf8135gnwu67kr3jcrl8p48m6d13xec8gq1q29gumdja6l7q0z5x4su8wmxaiv16yk54usczmktibfiy4uhchhxyflruuje230ryvg4m31vm9qvk6wa3juo2ab1fg0etgzw76y0kffkwkwnnf6ef1535cyw05xgwv0w9cb5dkfftr0mc590zrjcnaz00415yxkgvcuuo 00:06:41.709 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:41.709 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:41.709 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:41.709 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 [2024-12-10 14:12:06.453324] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:41.709 [2024-12-10 14:12:06.453426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:06:41.709 { 00:06:41.709 "subsystems": [ 00:06:41.709 { 00:06:41.709 "subsystem": "bdev", 00:06:41.709 "config": [ 00:06:41.709 { 00:06:41.709 "params": { 00:06:41.709 "trtype": "pcie", 00:06:41.709 "traddr": "0000:00:10.0", 00:06:41.709 "name": "Nvme0" 00:06:41.709 }, 00:06:41.709 "method": "bdev_nvme_attach_controller" 00:06:41.709 }, 00:06:41.709 { 00:06:41.709 "method": "bdev_wait_for_examine" 00:06:41.709 } 00:06:41.709 ] 00:06:41.709 } 00:06:41.709 ] 00:06:41.709 } 00:06:41.968 [2024-12-10 14:12:06.602685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.968 [2024-12-10 14:12:06.629260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.968 [2024-12-10 14:12:06.656120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.968  [2024-12-10T14:12:07.063Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:42.226 00:06:42.226 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:42.226 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:42.226 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:42.226 14:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.226 [2024-12-10 14:12:06.930181] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
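The dd_rw_offset (basic_offset) case launched above works one block into the namespace instead of at block zero: the 4096 generated bytes are written with --seek=1 (skip one I/O unit on the output bdev), read back with --skip=1 --count=1, and the read-back is then captured via read -rn4096 and compared against the original string with a bash pattern match, which is why a second copy of the data appears below with its characters backslash-escaped by xtrace. A minimal sketch of the same round trip, assuming the hypothetical bdev.json from the first sketch and a 4096-byte dd.dump0; cmp stands in here for the bash string comparison the test itself uses:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$DD --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json             # write starting at block offset 1
$DD --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev.json   # read one block back from offset 1
cmp -n 4096 dd.dump0 dd.dump1                                        # contents must match what was written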
00:06:42.226 [2024-12-10 14:12:06.930275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61018 ] 00:06:42.226 { 00:06:42.226 "subsystems": [ 00:06:42.226 { 00:06:42.226 "subsystem": "bdev", 00:06:42.226 "config": [ 00:06:42.226 { 00:06:42.226 "params": { 00:06:42.226 "trtype": "pcie", 00:06:42.226 "traddr": "0000:00:10.0", 00:06:42.226 "name": "Nvme0" 00:06:42.226 }, 00:06:42.226 "method": "bdev_nvme_attach_controller" 00:06:42.226 }, 00:06:42.226 { 00:06:42.226 "method": "bdev_wait_for_examine" 00:06:42.226 } 00:06:42.226 ] 00:06:42.226 } 00:06:42.226 ] 00:06:42.226 } 00:06:42.486 [2024-12-10 14:12:07.075741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.486 [2024-12-10 14:12:07.103910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.486 [2024-12-10 14:12:07.130833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.486  [2024-12-10T14:12:07.582Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:42.745 00:06:42.745 14:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ dqbvrhrbwzxzsw8ol3aj4myyl4frghoh1j3ptku2v7adcqzz02s1w9s33hcbjyzc2n4vkm2ij68ce1bq0or4x3onnpd8vg9b95p31kick2geb4mzans2f54iwpihwmpnuuqe06xtq6qnelz1cfm6juezrw9d0uoklobwkhogoio8dwas1mpxxpypxuunj2rpll6eu2ijxm5bekwxx6z16nkshjayzkdnqjf1zmc09iztwxzhz8bhd5fwgf1p82sbpu9j1hfz4y4pf1tuk87r8vaazxkfffds08a8p46hvqleh3fj4os0213p8h04a626ke27pbhhp07b1e965blnklqa35gl5x2o3ipabnhsyv140l13a5a2zi2fn87gyqo64fglt7e4ag56whsueeluuxqxi4esa1dh9byjgadraxk10r8a8m5ipipm7b4da8e8zt5xiineb07l6bmx9b9vph0djsk4jqvtlcdrdxmttihpy7dcqfeg6v3urhmnys8ig7x5kmpamdz9aoirxo4rwit07560n693kvjwf2l2vkv4lklzqg1r7v5pcce3dxayn53ow8rmc5hr3b1idbjv040bam981ihbfhndji1b28140xcvpztm2151mjzsv51wqwe4rgmjbu654z1syoa8fe05vdd81rcnusli1tzd37999ck9cph8x72k30s6agpw3fkke0ffftay600e5chg71zfkko0i2o5xnfuux0enp5jax5tlzufakzzcze6rxsfqgcp1cfuxahq75aw1bfpleqarwc2nya4odt530evbdk7cef65l7ebiw47p5h08pepd0jot58paweng5p3uke0mxqqtu5nprkaa22ykt0cci7sx6thp29flsyrkjnzel52lz038fp6sc1i9ot47i8btaiibtjy8510lzqwrk91451mj08r50cq1uemdyql6jzza44t0i2t0wv47oi72wma069n86euk9sfi8ghrv1v37x0ax6e5g6k6xy76rwl6j32qo61f0vdb645sl2z3g3dp725xcpvrenkj4pzbflz3ifwasekvt25bjif6jtb2ibqbjrk0nm7i7sth09m72ici3ye9q894pu4tuc06avy22t952exf0vlx247e0yqqpal4qrud5h972okwq0judpa60nak3ypogkxy7rb32xttufx1ki6ewv5dm92bljgsf49crthiszpondny881l0yktfqfb7qzhj0yzr245qvbgsmv5recfq762bwh32rv0bnecv9vw07wl7ajv2qtjtmc075efh2h0rdbtpkf1n54p2dolj0mgg9lm8ak4g5cvpgz0td1znpojf39z7ucf5lkygvfmvd0ppsmma475u4awbmsjxzx4nfb71fg6q12qgssu20c9pidnxxuffb7l95ygv07dn4flygqh11rdmw53mlkr56mp12fh0oy1c3vhg0yyniomxbv1ez2m085g37ul7yrrmbypbwxnhd0ts7fuph46ksb90ktpn1udzw7azjsgfl6fhtp3mtuvz45yl2wuq96ezhyi2fj3hrj7gdfp0zikjlygiz8gtkod970e70yzv50n5zlor4bnx7u3cf0s60au0ok166nqhmn2bmqo7glg9uh3r6s9qlpw9ia45vwzi7ibezaa61rydby9aiqevyvx6katrpyjtnpk1i9vlevb4ihtn2shbivlkck5o0xbc1qpwlb9m1zaxu0op0dkbcm0ysaxwwu68r296w511h6h1musb9xxl7fgei99me5h0c3d97jpv5o470exe6pja8i9wdlklux4svmmflss2dqxb3uvzjust79tkh5ct9fq4pl6o2qafsrklv9dmg443br3vnqo0jmt914x14ydz7c9u9kluyozctuy60tkz8cgjtf2zxdxd8qbujzc0pgkv5mf0q5hw48fehfz1zxszz01jbxao9alleg7s0gq2browcxl4ah5cetkkmfb7l7h5s3wmfahd1p4jkflfoimqt6vvjeijpmfyt64kcntpfdrgpsftmt1rlqmiifb29d58ffzyvcf3kpkv4yzxcyqlu9xc4s274oaff23iisvatrp8lkazhyd6385r5i
tpbn2whf9mujm36cdzc3sig883pc0v9y07imfcjtbxm9rdrkygkqac9cos0feehvwvt8xltsdh21utmpt990dkmltbxthv4iq06iinc6zs616epi3ao84ezh8e76w960y3r10kdo1rws26mbj1q5ricar2ldtmvtvzssxpm5eoy6d1dhvy9vgo5lhcej26q0abd7n2leptofyvlhf1nk7r2ae3bm0c0mzkd4yu09ffxhzun2kbraqez2dqli027isgoqg5xy55kxelqamqozsa20s89hjos8xg11dbo7uzvtyspmq92gc85zobeytwfrxgr77phju4n5ebmilgn1mwet6rrlshuu0y8xdad2fzb1yi5texifbnjn8vo1urnaj5vbz8pa21y7kl6nmdyfroluye2t8k66xy10rgl8ti7z6jtvobo6fem2nt4hxtvnho25r9p37pnjn6z48tla68wl4ql8w3ezwybclqx7rb3at64h93l3tku8n4nfkmij3hdrzayrdlu64vtp7q2rdz48054khwkehathxaelgp2f17pdh99jaem3g675psowau43ep2yj313dt11ex0gnuvvln0tv6cj5yzfkceo4889sn0o3ejy032rmtoov5xv8e24gu9dn0uvczf000kflkfwte3o88egeaf1jeji4qpozadx327nrj6akpv6s2psgp9jr322g0jj1nj1x76u07xvmptzvcbugbqvttu738u1sd1n1tg088jexqt18dhhqah5idwblovfedcumkz8l2xo55ox056qvmi3k113tm5on2q6oka6ngq8m0kty1srjyoywcc6yma75gccmuaklq58pkdb2x5maea0rhmzysvn72gs6pkfu33yh07rkn4e3jq7ixfhwtaqginmr2t195cz6qsxgulzij761m15gu1fheezh60nsj1w3pidr82c9712ygbhepmc419iv5mpukasroubiot6wyb82cb4rwvtv6qunicpgii3uxq6by8y4j4glhxrc2xvn33i32cg52wp0k82315u3udwjwl99q23kuv4tt8q42sd8ncb7yqoqp0abtvvdmaj6gxpsn6y1ckgmpobbu1pnwuyb3a81iz54bsvumtgkusylx98y3v7cjqkstb97g38qe0mmr1f5isnytsbt441rih7agazv7crhexse4zck78muyqw2qi6yr2frk0s6q4s88zgq8opg2vg1wa2wku6ftlo4ipwsvbv06x3h0uxfnqio8d0e2660e204vc1a3zi6y3qo1bq4rpcvlj1w5x7poey87x1oq4qjzw655z0pbwq022xs3sh4f6d9yfjrn58e6wghwucetbl7gsg3ffnrfknmv2452xcch5k9xh3diygsr9tjjq16yfxrvu4w3h9sbxbovy00va99auvkjg1n7mtf228yi6xljqki9qmfkg6n681dw0pizijth1pa9v2ic320s2lngu077i1arp2ium0715buuw2pt26cpbpc87qoo4hgddbgsczphqn9ndn9bzl387moeyjwnfijctwwlf1qbxt8ar3ds7sbx5kjwzmwsdyrwjbk1564v2ykhz4qunjv8lnqzopz11ztkhr6d7uhityuqgojhialhu7yytx81me23ltfq117h5m7ss1rl2ki4ty0s2jh4yu3luswrieo1uee6cb7bsftwz7ex4jvoo1afyc8r207ykor7jxpxpbccfjfnsj8hpmhu60yswuxrpsf8135gnwu67kr3jcrl8p48m6d13xec8gq1q29gumdja6l7q0z5x4su8wmxaiv16yk54usczmktibfiy4uhchhxyflruuje230ryvg4m31vm9qvk6wa3juo2ab1fg0etgzw76y0kffkwkwnnf6ef1535cyw05xgwv0w9cb5dkfftr0mc590zrjcnaz00415yxkgvcuuo == 
\d\q\b\v\r\h\r\b\w\z\x\z\s\w\8\o\l\3\a\j\4\m\y\y\l\4\f\r\g\h\o\h\1\j\3\p\t\k\u\2\v\7\a\d\c\q\z\z\0\2\s\1\w\9\s\3\3\h\c\b\j\y\z\c\2\n\4\v\k\m\2\i\j\6\8\c\e\1\b\q\0\o\r\4\x\3\o\n\n\p\d\8\v\g\9\b\9\5\p\3\1\k\i\c\k\2\g\e\b\4\m\z\a\n\s\2\f\5\4\i\w\p\i\h\w\m\p\n\u\u\q\e\0\6\x\t\q\6\q\n\e\l\z\1\c\f\m\6\j\u\e\z\r\w\9\d\0\u\o\k\l\o\b\w\k\h\o\g\o\i\o\8\d\w\a\s\1\m\p\x\x\p\y\p\x\u\u\n\j\2\r\p\l\l\6\e\u\2\i\j\x\m\5\b\e\k\w\x\x\6\z\1\6\n\k\s\h\j\a\y\z\k\d\n\q\j\f\1\z\m\c\0\9\i\z\t\w\x\z\h\z\8\b\h\d\5\f\w\g\f\1\p\8\2\s\b\p\u\9\j\1\h\f\z\4\y\4\p\f\1\t\u\k\8\7\r\8\v\a\a\z\x\k\f\f\f\d\s\0\8\a\8\p\4\6\h\v\q\l\e\h\3\f\j\4\o\s\0\2\1\3\p\8\h\0\4\a\6\2\6\k\e\2\7\p\b\h\h\p\0\7\b\1\e\9\6\5\b\l\n\k\l\q\a\3\5\g\l\5\x\2\o\3\i\p\a\b\n\h\s\y\v\1\4\0\l\1\3\a\5\a\2\z\i\2\f\n\8\7\g\y\q\o\6\4\f\g\l\t\7\e\4\a\g\5\6\w\h\s\u\e\e\l\u\u\x\q\x\i\4\e\s\a\1\d\h\9\b\y\j\g\a\d\r\a\x\k\1\0\r\8\a\8\m\5\i\p\i\p\m\7\b\4\d\a\8\e\8\z\t\5\x\i\i\n\e\b\0\7\l\6\b\m\x\9\b\9\v\p\h\0\d\j\s\k\4\j\q\v\t\l\c\d\r\d\x\m\t\t\i\h\p\y\7\d\c\q\f\e\g\6\v\3\u\r\h\m\n\y\s\8\i\g\7\x\5\k\m\p\a\m\d\z\9\a\o\i\r\x\o\4\r\w\i\t\0\7\5\6\0\n\6\9\3\k\v\j\w\f\2\l\2\v\k\v\4\l\k\l\z\q\g\1\r\7\v\5\p\c\c\e\3\d\x\a\y\n\5\3\o\w\8\r\m\c\5\h\r\3\b\1\i\d\b\j\v\0\4\0\b\a\m\9\8\1\i\h\b\f\h\n\d\j\i\1\b\2\8\1\4\0\x\c\v\p\z\t\m\2\1\5\1\m\j\z\s\v\5\1\w\q\w\e\4\r\g\m\j\b\u\6\5\4\z\1\s\y\o\a\8\f\e\0\5\v\d\d\8\1\r\c\n\u\s\l\i\1\t\z\d\3\7\9\9\9\c\k\9\c\p\h\8\x\7\2\k\3\0\s\6\a\g\p\w\3\f\k\k\e\0\f\f\f\t\a\y\6\0\0\e\5\c\h\g\7\1\z\f\k\k\o\0\i\2\o\5\x\n\f\u\u\x\0\e\n\p\5\j\a\x\5\t\l\z\u\f\a\k\z\z\c\z\e\6\r\x\s\f\q\g\c\p\1\c\f\u\x\a\h\q\7\5\a\w\1\b\f\p\l\e\q\a\r\w\c\2\n\y\a\4\o\d\t\5\3\0\e\v\b\d\k\7\c\e\f\6\5\l\7\e\b\i\w\4\7\p\5\h\0\8\p\e\p\d\0\j\o\t\5\8\p\a\w\e\n\g\5\p\3\u\k\e\0\m\x\q\q\t\u\5\n\p\r\k\a\a\2\2\y\k\t\0\c\c\i\7\s\x\6\t\h\p\2\9\f\l\s\y\r\k\j\n\z\e\l\5\2\l\z\0\3\8\f\p\6\s\c\1\i\9\o\t\4\7\i\8\b\t\a\i\i\b\t\j\y\8\5\1\0\l\z\q\w\r\k\9\1\4\5\1\m\j\0\8\r\5\0\c\q\1\u\e\m\d\y\q\l\6\j\z\z\a\4\4\t\0\i\2\t\0\w\v\4\7\o\i\7\2\w\m\a\0\6\9\n\8\6\e\u\k\9\s\f\i\8\g\h\r\v\1\v\3\7\x\0\a\x\6\e\5\g\6\k\6\x\y\7\6\r\w\l\6\j\3\2\q\o\6\1\f\0\v\d\b\6\4\5\s\l\2\z\3\g\3\d\p\7\2\5\x\c\p\v\r\e\n\k\j\4\p\z\b\f\l\z\3\i\f\w\a\s\e\k\v\t\2\5\b\j\i\f\6\j\t\b\2\i\b\q\b\j\r\k\0\n\m\7\i\7\s\t\h\0\9\m\7\2\i\c\i\3\y\e\9\q\8\9\4\p\u\4\t\u\c\0\6\a\v\y\2\2\t\9\5\2\e\x\f\0\v\l\x\2\4\7\e\0\y\q\q\p\a\l\4\q\r\u\d\5\h\9\7\2\o\k\w\q\0\j\u\d\p\a\6\0\n\a\k\3\y\p\o\g\k\x\y\7\r\b\3\2\x\t\t\u\f\x\1\k\i\6\e\w\v\5\d\m\9\2\b\l\j\g\s\f\4\9\c\r\t\h\i\s\z\p\o\n\d\n\y\8\8\1\l\0\y\k\t\f\q\f\b\7\q\z\h\j\0\y\z\r\2\4\5\q\v\b\g\s\m\v\5\r\e\c\f\q\7\6\2\b\w\h\3\2\r\v\0\b\n\e\c\v\9\v\w\0\7\w\l\7\a\j\v\2\q\t\j\t\m\c\0\7\5\e\f\h\2\h\0\r\d\b\t\p\k\f\1\n\5\4\p\2\d\o\l\j\0\m\g\g\9\l\m\8\a\k\4\g\5\c\v\p\g\z\0\t\d\1\z\n\p\o\j\f\3\9\z\7\u\c\f\5\l\k\y\g\v\f\m\v\d\0\p\p\s\m\m\a\4\7\5\u\4\a\w\b\m\s\j\x\z\x\4\n\f\b\7\1\f\g\6\q\1\2\q\g\s\s\u\2\0\c\9\p\i\d\n\x\x\u\f\f\b\7\l\9\5\y\g\v\0\7\d\n\4\f\l\y\g\q\h\1\1\r\d\m\w\5\3\m\l\k\r\5\6\m\p\1\2\f\h\0\o\y\1\c\3\v\h\g\0\y\y\n\i\o\m\x\b\v\1\e\z\2\m\0\8\5\g\3\7\u\l\7\y\r\r\m\b\y\p\b\w\x\n\h\d\0\t\s\7\f\u\p\h\4\6\k\s\b\9\0\k\t\p\n\1\u\d\z\w\7\a\z\j\s\g\f\l\6\f\h\t\p\3\m\t\u\v\z\4\5\y\l\2\w\u\q\9\6\e\z\h\y\i\2\f\j\3\h\r\j\7\g\d\f\p\0\z\i\k\j\l\y\g\i\z\8\g\t\k\o\d\9\7\0\e\7\0\y\z\v\5\0\n\5\z\l\o\r\4\b\n\x\7\u\3\c\f\0\s\6\0\a\u\0\o\k\1\6\6\n\q\h\m\n\2\b\m\q\o\7\g\l\g\9\u\h\3\r\6\s\9\q\l\p\w\9\i\a\4\5\v\w\z\i\7\i\b\e\z\a\a\6\1\r\y\d\b\y\9\a\i\q\e\v\y\v\x\6\k\a\t\r\p\y\j\t\n\p\k\1\i\9\v\l\e\v\b\4\i\h\t\n\2\s\h\b\i\v\l\k\c\k\5\o\0\x\b\c\1\q\p\w\l\b\9\m\1\z\a\x\u\0\o\p\0\d\k\b\c\m\0\y\s\a\x\w\w\u\6\8\r\2\9\6\w\5\1\1\h\6\h\1\m\u\
s\b\9\x\x\l\7\f\g\e\i\9\9\m\e\5\h\0\c\3\d\9\7\j\p\v\5\o\4\7\0\e\x\e\6\p\j\a\8\i\9\w\d\l\k\l\u\x\4\s\v\m\m\f\l\s\s\2\d\q\x\b\3\u\v\z\j\u\s\t\7\9\t\k\h\5\c\t\9\f\q\4\p\l\6\o\2\q\a\f\s\r\k\l\v\9\d\m\g\4\4\3\b\r\3\v\n\q\o\0\j\m\t\9\1\4\x\1\4\y\d\z\7\c\9\u\9\k\l\u\y\o\z\c\t\u\y\6\0\t\k\z\8\c\g\j\t\f\2\z\x\d\x\d\8\q\b\u\j\z\c\0\p\g\k\v\5\m\f\0\q\5\h\w\4\8\f\e\h\f\z\1\z\x\s\z\z\0\1\j\b\x\a\o\9\a\l\l\e\g\7\s\0\g\q\2\b\r\o\w\c\x\l\4\a\h\5\c\e\t\k\k\m\f\b\7\l\7\h\5\s\3\w\m\f\a\h\d\1\p\4\j\k\f\l\f\o\i\m\q\t\6\v\v\j\e\i\j\p\m\f\y\t\6\4\k\c\n\t\p\f\d\r\g\p\s\f\t\m\t\1\r\l\q\m\i\i\f\b\2\9\d\5\8\f\f\z\y\v\c\f\3\k\p\k\v\4\y\z\x\c\y\q\l\u\9\x\c\4\s\2\7\4\o\a\f\f\2\3\i\i\s\v\a\t\r\p\8\l\k\a\z\h\y\d\6\3\8\5\r\5\i\t\p\b\n\2\w\h\f\9\m\u\j\m\3\6\c\d\z\c\3\s\i\g\8\8\3\p\c\0\v\9\y\0\7\i\m\f\c\j\t\b\x\m\9\r\d\r\k\y\g\k\q\a\c\9\c\o\s\0\f\e\e\h\v\w\v\t\8\x\l\t\s\d\h\2\1\u\t\m\p\t\9\9\0\d\k\m\l\t\b\x\t\h\v\4\i\q\0\6\i\i\n\c\6\z\s\6\1\6\e\p\i\3\a\o\8\4\e\z\h\8\e\7\6\w\9\6\0\y\3\r\1\0\k\d\o\1\r\w\s\2\6\m\b\j\1\q\5\r\i\c\a\r\2\l\d\t\m\v\t\v\z\s\s\x\p\m\5\e\o\y\6\d\1\d\h\v\y\9\v\g\o\5\l\h\c\e\j\2\6\q\0\a\b\d\7\n\2\l\e\p\t\o\f\y\v\l\h\f\1\n\k\7\r\2\a\e\3\b\m\0\c\0\m\z\k\d\4\y\u\0\9\f\f\x\h\z\u\n\2\k\b\r\a\q\e\z\2\d\q\l\i\0\2\7\i\s\g\o\q\g\5\x\y\5\5\k\x\e\l\q\a\m\q\o\z\s\a\2\0\s\8\9\h\j\o\s\8\x\g\1\1\d\b\o\7\u\z\v\t\y\s\p\m\q\9\2\g\c\8\5\z\o\b\e\y\t\w\f\r\x\g\r\7\7\p\h\j\u\4\n\5\e\b\m\i\l\g\n\1\m\w\e\t\6\r\r\l\s\h\u\u\0\y\8\x\d\a\d\2\f\z\b\1\y\i\5\t\e\x\i\f\b\n\j\n\8\v\o\1\u\r\n\a\j\5\v\b\z\8\p\a\2\1\y\7\k\l\6\n\m\d\y\f\r\o\l\u\y\e\2\t\8\k\6\6\x\y\1\0\r\g\l\8\t\i\7\z\6\j\t\v\o\b\o\6\f\e\m\2\n\t\4\h\x\t\v\n\h\o\2\5\r\9\p\3\7\p\n\j\n\6\z\4\8\t\l\a\6\8\w\l\4\q\l\8\w\3\e\z\w\y\b\c\l\q\x\7\r\b\3\a\t\6\4\h\9\3\l\3\t\k\u\8\n\4\n\f\k\m\i\j\3\h\d\r\z\a\y\r\d\l\u\6\4\v\t\p\7\q\2\r\d\z\4\8\0\5\4\k\h\w\k\e\h\a\t\h\x\a\e\l\g\p\2\f\1\7\p\d\h\9\9\j\a\e\m\3\g\6\7\5\p\s\o\w\a\u\4\3\e\p\2\y\j\3\1\3\d\t\1\1\e\x\0\g\n\u\v\v\l\n\0\t\v\6\c\j\5\y\z\f\k\c\e\o\4\8\8\9\s\n\0\o\3\e\j\y\0\3\2\r\m\t\o\o\v\5\x\v\8\e\2\4\g\u\9\d\n\0\u\v\c\z\f\0\0\0\k\f\l\k\f\w\t\e\3\o\8\8\e\g\e\a\f\1\j\e\j\i\4\q\p\o\z\a\d\x\3\2\7\n\r\j\6\a\k\p\v\6\s\2\p\s\g\p\9\j\r\3\2\2\g\0\j\j\1\n\j\1\x\7\6\u\0\7\x\v\m\p\t\z\v\c\b\u\g\b\q\v\t\t\u\7\3\8\u\1\s\d\1\n\1\t\g\0\8\8\j\e\x\q\t\1\8\d\h\h\q\a\h\5\i\d\w\b\l\o\v\f\e\d\c\u\m\k\z\8\l\2\x\o\5\5\o\x\0\5\6\q\v\m\i\3\k\1\1\3\t\m\5\o\n\2\q\6\o\k\a\6\n\g\q\8\m\0\k\t\y\1\s\r\j\y\o\y\w\c\c\6\y\m\a\7\5\g\c\c\m\u\a\k\l\q\5\8\p\k\d\b\2\x\5\m\a\e\a\0\r\h\m\z\y\s\v\n\7\2\g\s\6\p\k\f\u\3\3\y\h\0\7\r\k\n\4\e\3\j\q\7\i\x\f\h\w\t\a\q\g\i\n\m\r\2\t\1\9\5\c\z\6\q\s\x\g\u\l\z\i\j\7\6\1\m\1\5\g\u\1\f\h\e\e\z\h\6\0\n\s\j\1\w\3\p\i\d\r\8\2\c\9\7\1\2\y\g\b\h\e\p\m\c\4\1\9\i\v\5\m\p\u\k\a\s\r\o\u\b\i\o\t\6\w\y\b\8\2\c\b\4\r\w\v\t\v\6\q\u\n\i\c\p\g\i\i\3\u\x\q\6\b\y\8\y\4\j\4\g\l\h\x\r\c\2\x\v\n\3\3\i\3\2\c\g\5\2\w\p\0\k\8\2\3\1\5\u\3\u\d\w\j\w\l\9\9\q\2\3\k\u\v\4\t\t\8\q\4\2\s\d\8\n\c\b\7\y\q\o\q\p\0\a\b\t\v\v\d\m\a\j\6\g\x\p\s\n\6\y\1\c\k\g\m\p\o\b\b\u\1\p\n\w\u\y\b\3\a\8\1\i\z\5\4\b\s\v\u\m\t\g\k\u\s\y\l\x\9\8\y\3\v\7\c\j\q\k\s\t\b\9\7\g\3\8\q\e\0\m\m\r\1\f\5\i\s\n\y\t\s\b\t\4\4\1\r\i\h\7\a\g\a\z\v\7\c\r\h\e\x\s\e\4\z\c\k\7\8\m\u\y\q\w\2\q\i\6\y\r\2\f\r\k\0\s\6\q\4\s\8\8\z\g\q\8\o\p\g\2\v\g\1\w\a\2\w\k\u\6\f\t\l\o\4\i\p\w\s\v\b\v\0\6\x\3\h\0\u\x\f\n\q\i\o\8\d\0\e\2\6\6\0\e\2\0\4\v\c\1\a\3\z\i\6\y\3\q\o\1\b\q\4\r\p\c\v\l\j\1\w\5\x\7\p\o\e\y\8\7\x\1\o\q\4\q\j\z\w\6\5\5\z\0\p\b\w\q\0\2\2\x\s\3\s\h\4\f\6\d\9\y\f\j\r\n\5\8\e\6\w\g\h\w\u\c\e\t\b\l\7\g\s\g\3\f\f\n\r\f\k\n\m\v\2\4\5\2\x\c\c\h\5\k\9\x\h\3\d\i\y\g\s\r\9\t\j\j\q\1\6\y\f\x\r\v\u\4\w\3\h\9\s\b\x\b\o\v
\y\0\0\v\a\9\9\a\u\v\k\j\g\1\n\7\m\t\f\2\2\8\y\i\6\x\l\j\q\k\i\9\q\m\f\k\g\6\n\6\8\1\d\w\0\p\i\z\i\j\t\h\1\p\a\9\v\2\i\c\3\2\0\s\2\l\n\g\u\0\7\7\i\1\a\r\p\2\i\u\m\0\7\1\5\b\u\u\w\2\p\t\2\6\c\p\b\p\c\8\7\q\o\o\4\h\g\d\d\b\g\s\c\z\p\h\q\n\9\n\d\n\9\b\z\l\3\8\7\m\o\e\y\j\w\n\f\i\j\c\t\w\w\l\f\1\q\b\x\t\8\a\r\3\d\s\7\s\b\x\5\k\j\w\z\m\w\s\d\y\r\w\j\b\k\1\5\6\4\v\2\y\k\h\z\4\q\u\n\j\v\8\l\n\q\z\o\p\z\1\1\z\t\k\h\r\6\d\7\u\h\i\t\y\u\q\g\o\j\h\i\a\l\h\u\7\y\y\t\x\8\1\m\e\2\3\l\t\f\q\1\1\7\h\5\m\7\s\s\1\r\l\2\k\i\4\t\y\0\s\2\j\h\4\y\u\3\l\u\s\w\r\i\e\o\1\u\e\e\6\c\b\7\b\s\f\t\w\z\7\e\x\4\j\v\o\o\1\a\f\y\c\8\r\2\0\7\y\k\o\r\7\j\x\p\x\p\b\c\c\f\j\f\n\s\j\8\h\p\m\h\u\6\0\y\s\w\u\x\r\p\s\f\8\1\3\5\g\n\w\u\6\7\k\r\3\j\c\r\l\8\p\4\8\m\6\d\1\3\x\e\c\8\g\q\1\q\2\9\g\u\m\d\j\a\6\l\7\q\0\z\5\x\4\s\u\8\w\m\x\a\i\v\1\6\y\k\5\4\u\s\c\z\m\k\t\i\b\f\i\y\4\u\h\c\h\h\x\y\f\l\r\u\u\j\e\2\3\0\r\y\v\g\4\m\3\1\v\m\9\q\v\k\6\w\a\3\j\u\o\2\a\b\1\f\g\0\e\t\g\z\w\7\6\y\0\k\f\f\k\w\k\w\n\n\f\6\e\f\1\5\3\5\c\y\w\0\5\x\g\w\v\0\w\9\c\b\5\d\k\f\f\t\r\0\m\c\5\9\0\z\r\j\c\n\a\z\0\0\4\1\5\y\x\k\g\v\c\u\u\o ]] 00:06:42.746 ************************************ 00:06:42.746 END TEST dd_rw_offset 00:06:42.746 ************************************ 00:06:42.746 00:06:42.746 real 0m0.982s 00:06:42.746 user 0m0.685s 00:06:42.746 sys 0m0.380s 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.746 14:12:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.746 [2024-12-10 14:12:07.438010] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
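The dd_rw_offset comparison logged above is more legible in isolation: basic_rw.sh writes a randomly generated payload through the Nvme0 bdev at an offset, copies it back out, reads at most 4096 bytes of the result (the read -rn4096 data_check step above) and pattern-matches it against the original string; the long [[ ... == \d\q\b... ]] expression is that comparison with every character backslash-escaped by xtrace. A minimal sketch of the same check, where dd.dump1 stands for the read-back file and $data for the generated payload (both names are placeholders, not the script's exact variables):

    read -rn4096 data_check < dd.dump1                    # first 4096 bytes of the read-back copy
    [[ "$data_check" == "$data" ]] && echo "offset round-trip OK"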
00:06:42.746 [2024-12-10 14:12:07.438107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61048 ] 00:06:42.746 { 00:06:42.746 "subsystems": [ 00:06:42.746 { 00:06:42.746 "subsystem": "bdev", 00:06:42.746 "config": [ 00:06:42.746 { 00:06:42.746 "params": { 00:06:42.746 "trtype": "pcie", 00:06:42.746 "traddr": "0000:00:10.0", 00:06:42.746 "name": "Nvme0" 00:06:42.746 }, 00:06:42.746 "method": "bdev_nvme_attach_controller" 00:06:42.746 }, 00:06:42.746 { 00:06:42.746 "method": "bdev_wait_for_examine" 00:06:42.746 } 00:06:42.746 ] 00:06:42.746 } 00:06:42.746 ] 00:06:42.746 } 00:06:43.005 [2024-12-10 14:12:07.589854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.005 [2024-12-10 14:12:07.628043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.005 [2024-12-10 14:12:07.661145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.005  [2024-12-10T14:12:08.101Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:43.264 00:06:43.264 14:12:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.264 00:06:43.264 real 0m13.868s 00:06:43.264 user 0m10.030s 00:06:43.264 sys 0m4.398s 00:06:43.264 14:12:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.264 14:12:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.264 ************************************ 00:06:43.264 END TEST spdk_dd_basic_rw 00:06:43.264 ************************************ 00:06:43.264 14:12:07 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:43.264 14:12:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.264 14:12:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.264 14:12:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.264 ************************************ 00:06:43.264 START TEST spdk_dd_posix 00:06:43.264 ************************************ 00:06:43.264 14:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:43.264 * Looking for test storage... 
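Every spdk_dd run in this suite receives its bdev configuration as JSON on a file descriptor, which is why the same "subsystems"/"bdev" block recurs throughout the log; in the clear_nvme step that closed the basic_rw suite above, that descriptor is /dev/fd/62 produced by process substitution around gen_conf. A minimal sketch of the invocation pattern, with the JSON taken from the log and spdk_dd standing in for the full build/bin path (the gen_conf body here is illustrative, not the test's actual helper):

    gen_conf() {   # hypothetical stand-in that prints the config shown above
      cat <<'CONF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    CONF
    }
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)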
00:06:43.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.264 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.264 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.264 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.524 --rc genhtml_branch_coverage=1 00:06:43.524 --rc genhtml_function_coverage=1 00:06:43.524 --rc genhtml_legend=1 00:06:43.524 --rc geninfo_all_blocks=1 00:06:43.524 --rc geninfo_unexecuted_blocks=1 00:06:43.524 00:06:43.524 ' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.524 --rc genhtml_branch_coverage=1 00:06:43.524 --rc genhtml_function_coverage=1 00:06:43.524 --rc genhtml_legend=1 00:06:43.524 --rc geninfo_all_blocks=1 00:06:43.524 --rc geninfo_unexecuted_blocks=1 00:06:43.524 00:06:43.524 ' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.524 --rc genhtml_branch_coverage=1 00:06:43.524 --rc genhtml_function_coverage=1 00:06:43.524 --rc genhtml_legend=1 00:06:43.524 --rc geninfo_all_blocks=1 00:06:43.524 --rc geninfo_unexecuted_blocks=1 00:06:43.524 00:06:43.524 ' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.524 --rc genhtml_branch_coverage=1 00:06:43.524 --rc genhtml_function_coverage=1 00:06:43.524 --rc genhtml_legend=1 00:06:43.524 --rc geninfo_all_blocks=1 00:06:43.524 --rc geninfo_unexecuted_blocks=1 00:06:43.524 00:06:43.524 ' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:43.524 * First test run, liburing in use 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.524 ************************************ 00:06:43.524 START TEST dd_flag_append 00:06:43.524 ************************************ 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=wxdsb8pv97tmrrp0thglahd1ocrpnvud 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=dnn8gay1h7pgsqcqdkolbe22uiyhizz2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s wxdsb8pv97tmrrp0thglahd1ocrpnvud 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s dnn8gay1h7pgsqcqdkolbe22uiyhizz2 00:06:43.524 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:43.525 [2024-12-10 14:12:08.204598] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
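The dd_flag_append case set up above is simple to restate: two 32-character random strings are written to dd.dump0 and dd.dump1, dd.dump0 is then copied onto dd.dump1 with --oflag=append, and the test expects dump1 to end up as its original bytes followed by dump0's (that is the concatenated pattern match on the following lines). A condensed sketch with the values from this run, spdk_dd again standing for the full build/bin path:

    dump0=wxdsb8pv97tmrrp0thglahd1ocrpnvud
    dump1=dnn8gay1h7pgsqcqdkolbe22uiyhizz2
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append        # append rather than truncate
    [[ "$(<dd.dump1)" == "${dump1}${dump0}" ]] && echo "append flag honoured"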
00:06:43.525 [2024-12-10 14:12:08.204692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61120 ] 00:06:43.525 [2024-12-10 14:12:08.349483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.783 [2024-12-10 14:12:08.377899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.783 [2024-12-10 14:12:08.407637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.783  [2024-12-10T14:12:08.620Z] Copying: 32/32 [B] (average 31 kBps) 00:06:43.783 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ dnn8gay1h7pgsqcqdkolbe22uiyhizz2wxdsb8pv97tmrrp0thglahd1ocrpnvud == \d\n\n\8\g\a\y\1\h\7\p\g\s\q\c\q\d\k\o\l\b\e\2\2\u\i\y\h\i\z\z\2\w\x\d\s\b\8\p\v\9\7\t\m\r\r\p\0\t\h\g\l\a\h\d\1\o\c\r\p\n\v\u\d ]] 00:06:43.783 00:06:43.783 real 0m0.401s 00:06:43.783 user 0m0.193s 00:06:43.783 sys 0m0.175s 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.783 ************************************ 00:06:43.783 END TEST dd_flag_append 00:06:43.783 ************************************ 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.783 ************************************ 00:06:43.783 START TEST dd_flag_directory 00:06:43.783 ************************************ 00:06:43.783 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.784 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.042 [2024-12-10 14:12:08.653175] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:44.042 [2024-12-10 14:12:08.653285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61143 ] 00:06:44.042 [2024-12-10 14:12:08.798842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.042 [2024-12-10 14:12:08.827849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.042 [2024-12-10 14:12:08.857529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.301 [2024-12-10 14:12:08.878640] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.301 [2024-12-10 14:12:08.878709] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.301 [2024-12-10 14:12:08.878723] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.301 [2024-12-10 14:12:08.940246] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:44.301 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:44.301 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.302 14:12:08 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.302 14:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.302 [2024-12-10 14:12:09.047221] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:44.302 [2024-12-10 14:12:09.047317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61152 ] 00:06:44.561 [2024-12-10 14:12:09.191339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.561 [2024-12-10 14:12:09.218850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.561 [2024-12-10 14:12:09.244943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.561 [2024-12-10 14:12:09.263776] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.561 [2024-12-10 14:12:09.263844] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.561 [2024-12-10 14:12:09.263873] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.561 [2024-12-10 14:12:09.328853] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.561 00:06:44.561 real 0m0.785s 00:06:44.561 user 0m0.404s 00:06:44.561 sys 0m0.174s 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.561 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:44.561 ************************************ 00:06:44.561 END TEST dd_flag_directory 00:06:44.561 ************************************ 00:06:44.820 14:12:09 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.820 ************************************ 00:06:44.820 START TEST dd_flag_nofollow 00:06:44.820 ************************************ 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.820 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.821 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.821 [2024-12-10 14:12:09.495639] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:44.821 [2024-12-10 14:12:09.495748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61181 ] 00:06:44.821 [2024-12-10 14:12:09.639536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.079 [2024-12-10 14:12:09.667697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.079 [2024-12-10 14:12:09.693985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.079 [2024-12-10 14:12:09.711357] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.079 [2024-12-10 14:12:09.711424] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.079 [2024-12-10 14:12:09.711454] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.079 [2024-12-10 14:12:09.769803] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.080 14:12:09 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.080 14:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.080 [2024-12-10 14:12:09.858936] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:45.080 [2024-12-10 14:12:09.859058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61185 ] 00:06:45.339 [2024-12-10 14:12:09.998890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.339 [2024-12-10 14:12:10.029672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.339 [2024-12-10 14:12:10.056477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.339 [2024-12-10 14:12:10.073853] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.339 [2024-12-10 14:12:10.073918] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.339 [2024-12-10 14:12:10.073946] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.339 [2024-12-10 14:12:10.139151] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:45.599 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.599 [2024-12-10 14:12:10.255560] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
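Both failing nofollow runs above follow the same pattern: dd.dump0.link and dd.dump1.link are symlinks created with ln -fs, and opening the link with the nofollow flag fails with "Too many levels of symbolic links" (ELOOP), the errno an O_NOFOLLOW-style open reports for a symlink; the copy starting above, which uses the link without the flag, is expected to succeed. A condensed sketch of the three cases, assuming the flag maps to O_NOFOLLOW as the logged errors suggest:

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1     # fails: ELOOP on the input link
    spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow     # fails: ELOOP on the output link
    spdk_dd --if=dd.dump0.link --of=dd.dump1                      # link followed, copy succeeds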
00:06:45.599 [2024-12-10 14:12:10.255658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61198 ] 00:06:45.599 [2024-12-10 14:12:10.399537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.599 [2024-12-10 14:12:10.426172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.858 [2024-12-10 14:12:10.453596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.858  [2024-12-10T14:12:10.695Z] Copying: 512/512 [B] (average 500 kBps) 00:06:45.858 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 0ocwf2vqz5smg2zlz6fhvav17mepb8z51953skvcqokw4i4t8obu4yiks7dqn851am1uq33vt3pynjyiomnq85vsvo4x16wpflmtkre9zhon0dx0x9z96hmvhvwopdc36d4danv0wqkann9fdihjcei1wcj34bap2uwne1ubafy4opsy1qxurnuxlxt47u3mhnsohzp6vxb6o38ajqm8rku2tlcuc74honalj2cqe8vp5zfbh77qs13ol6l2a5c3op87y6bmyd6ol4iyf48fupw03ws2rb1crhf7wh6sfo86hc95mro0oj6ah82z98pxg9gepaxqwtexcid5kfww7q61qsl2vlnjfdqabxrloa76gwvhh8lhincst33lkmeqnlzqhegoqx66z10k7uqi73jg8rw2sta2u93bdaj2hibamg142tod5jfzdg4ka8c480npnmwgswnjqeptfrv9g6zt0coivakhjxawna1btz0ttd4mepr31bvzz98a9om7 == \0\o\c\w\f\2\v\q\z\5\s\m\g\2\z\l\z\6\f\h\v\a\v\1\7\m\e\p\b\8\z\5\1\9\5\3\s\k\v\c\q\o\k\w\4\i\4\t\8\o\b\u\4\y\i\k\s\7\d\q\n\8\5\1\a\m\1\u\q\3\3\v\t\3\p\y\n\j\y\i\o\m\n\q\8\5\v\s\v\o\4\x\1\6\w\p\f\l\m\t\k\r\e\9\z\h\o\n\0\d\x\0\x\9\z\9\6\h\m\v\h\v\w\o\p\d\c\3\6\d\4\d\a\n\v\0\w\q\k\a\n\n\9\f\d\i\h\j\c\e\i\1\w\c\j\3\4\b\a\p\2\u\w\n\e\1\u\b\a\f\y\4\o\p\s\y\1\q\x\u\r\n\u\x\l\x\t\4\7\u\3\m\h\n\s\o\h\z\p\6\v\x\b\6\o\3\8\a\j\q\m\8\r\k\u\2\t\l\c\u\c\7\4\h\o\n\a\l\j\2\c\q\e\8\v\p\5\z\f\b\h\7\7\q\s\1\3\o\l\6\l\2\a\5\c\3\o\p\8\7\y\6\b\m\y\d\6\o\l\4\i\y\f\4\8\f\u\p\w\0\3\w\s\2\r\b\1\c\r\h\f\7\w\h\6\s\f\o\8\6\h\c\9\5\m\r\o\0\o\j\6\a\h\8\2\z\9\8\p\x\g\9\g\e\p\a\x\q\w\t\e\x\c\i\d\5\k\f\w\w\7\q\6\1\q\s\l\2\v\l\n\j\f\d\q\a\b\x\r\l\o\a\7\6\g\w\v\h\h\8\l\h\i\n\c\s\t\3\3\l\k\m\e\q\n\l\z\q\h\e\g\o\q\x\6\6\z\1\0\k\7\u\q\i\7\3\j\g\8\r\w\2\s\t\a\2\u\9\3\b\d\a\j\2\h\i\b\a\m\g\1\4\2\t\o\d\5\j\f\z\d\g\4\k\a\8\c\4\8\0\n\p\n\m\w\g\s\w\n\j\q\e\p\t\f\r\v\9\g\6\z\t\0\c\o\i\v\a\k\h\j\x\a\w\n\a\1\b\t\z\0\t\t\d\4\m\e\p\r\3\1\b\v\z\z\9\8\a\9\o\m\7 ]] 00:06:45.858 00:06:45.858 real 0m1.163s 00:06:45.858 user 0m0.570s 00:06:45.858 sys 0m0.360s 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.858 ************************************ 00:06:45.858 END TEST dd_flag_nofollow 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:45.858 ************************************ 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.858 ************************************ 00:06:45.858 START TEST dd_flag_noatime 00:06:45.858 ************************************ 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733839930 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733839930 00:06:45.858 14:12:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:47.234 14:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.234 [2024-12-10 14:12:11.718983] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:47.234 [2024-12-10 14:12:11.719098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:06:47.234 [2024-12-10 14:12:11.867778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.234 [2024-12-10 14:12:11.906807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.234 [2024-12-10 14:12:11.939225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.234  [2024-12-10T14:12:12.330Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.493 00:06:47.493 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.493 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733839930 )) 00:06:47.493 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.493 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733839930 )) 00:06:47.493 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.493 [2024-12-10 14:12:12.152556] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
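The noatime case hinges on the stat calls around each copy: the access time of dd.dump0 is recorded with stat --printf=%X (1733839930 in this run), the file is read through spdk_dd with --iflag=noatime and the atime must be unchanged, then the unflagged copy that starts above must advance it, which is what the (( atime_if < ... )) check on the following lines verifies. A minimal sketch of that pattern (spdk_dd stands for the full build/bin path; a relatime mount could mask the second check, so treat it as illustrative):

    atime_if=$(stat --printf=%X dd.dump0)                 # access time before any copy
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))        # noatime read leaves atime untouched
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_if ))         # ordinary read updates atime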
00:06:47.493 [2024-12-10 14:12:12.152668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:06:47.493 [2024-12-10 14:12:12.298988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.493 [2024-12-10 14:12:12.328309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.752 [2024-12-10 14:12:12.358836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.752  [2024-12-10T14:12:12.589Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.752 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733839932 )) 00:06:47.752 00:06:47.752 real 0m1.855s 00:06:47.752 user 0m0.442s 00:06:47.752 sys 0m0.362s 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.752 ************************************ 00:06:47.752 END TEST dd_flag_noatime 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 ************************************ 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 ************************************ 00:06:47.752 START TEST dd_flags_misc 00:06:47.752 ************************************ 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.752 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:48.011 [2024-12-10 14:12:12.612276] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
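The dd_flags_misc loop that begins above walks the full read/write flag matrix: flag_ro ranges over direct and nonblock, flag_rw over direct, nonblock, sync and dsync, and each pair performs a 512-byte copy followed by the same pattern-match verification used in the earlier cases (the run starting above is the first pair, direct/direct). A sketch of the loop shape with the verification reduced to a comment:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)                # direct nonblock sync dsync
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            # compare dd.dump1 against the generated 512-byte payload here
        done
    done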
00:06:48.011 [2024-12-10 14:12:12.612406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61277 ] 00:06:48.011 [2024-12-10 14:12:12.759684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.011 [2024-12-10 14:12:12.787434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.011 [2024-12-10 14:12:12.813492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.011  [2024-12-10T14:12:13.107Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.270 00:06:48.270 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t47nbc5j6fatf17b879q29cyn2byt794lynv6z8q8adfv6bpecxd50a9escehb5zilqbx4khldhgn647mpfcevkz05l75bdj846y41nvjrj7nl6lbct4e77avdnnr6f2311aosdjwazxi7mx47ksp6g5m1w5pvnjlyw7aadwrb5oudisnym90ns46021bobcxw1qje0ep3oifv9u2jksj1emfy1rz3m3jfsorahkhgrw3956gjedip5afkb16vrhmhb86glc4nm8w29veqz208vau9jqvqfdghb3dkextxfiv822ui0gbqgrqp66lphm22eex4huowz69kvr4gqtlf3e2qqiokwq0nmkzboqh4r7hjs3xca3zxk45xyegtjwfren8cw2lh7ojr1dbld1wg1tfs28uaefsbztfjxm5uf12o6d0nidvnel0dmv8b59pcc7eorjxyyo579i5qerlzqujsfiwbmn15n9nro2jwpi8g3lrftoaarh597dyzl0 == \t\4\7\n\b\c\5\j\6\f\a\t\f\1\7\b\8\7\9\q\2\9\c\y\n\2\b\y\t\7\9\4\l\y\n\v\6\z\8\q\8\a\d\f\v\6\b\p\e\c\x\d\5\0\a\9\e\s\c\e\h\b\5\z\i\l\q\b\x\4\k\h\l\d\h\g\n\6\4\7\m\p\f\c\e\v\k\z\0\5\l\7\5\b\d\j\8\4\6\y\4\1\n\v\j\r\j\7\n\l\6\l\b\c\t\4\e\7\7\a\v\d\n\n\r\6\f\2\3\1\1\a\o\s\d\j\w\a\z\x\i\7\m\x\4\7\k\s\p\6\g\5\m\1\w\5\p\v\n\j\l\y\w\7\a\a\d\w\r\b\5\o\u\d\i\s\n\y\m\9\0\n\s\4\6\0\2\1\b\o\b\c\x\w\1\q\j\e\0\e\p\3\o\i\f\v\9\u\2\j\k\s\j\1\e\m\f\y\1\r\z\3\m\3\j\f\s\o\r\a\h\k\h\g\r\w\3\9\5\6\g\j\e\d\i\p\5\a\f\k\b\1\6\v\r\h\m\h\b\8\6\g\l\c\4\n\m\8\w\2\9\v\e\q\z\2\0\8\v\a\u\9\j\q\v\q\f\d\g\h\b\3\d\k\e\x\t\x\f\i\v\8\2\2\u\i\0\g\b\q\g\r\q\p\6\6\l\p\h\m\2\2\e\e\x\4\h\u\o\w\z\6\9\k\v\r\4\g\q\t\l\f\3\e\2\q\q\i\o\k\w\q\0\n\m\k\z\b\o\q\h\4\r\7\h\j\s\3\x\c\a\3\z\x\k\4\5\x\y\e\g\t\j\w\f\r\e\n\8\c\w\2\l\h\7\o\j\r\1\d\b\l\d\1\w\g\1\t\f\s\2\8\u\a\e\f\s\b\z\t\f\j\x\m\5\u\f\1\2\o\6\d\0\n\i\d\v\n\e\l\0\d\m\v\8\b\5\9\p\c\c\7\e\o\r\j\x\y\y\o\5\7\9\i\5\q\e\r\l\z\q\u\j\s\f\i\w\b\m\n\1\5\n\9\n\r\o\2\j\w\p\i\8\g\3\l\r\f\t\o\a\a\r\h\5\9\7\d\y\z\l\0 ]] 00:06:48.270 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.270 14:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:48.270 [2024-12-10 14:12:13.001860] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:48.270 [2024-12-10 14:12:13.001999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61281 ] 00:06:48.530 [2024-12-10 14:12:13.147724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.530 [2024-12-10 14:12:13.179719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.530 [2024-12-10 14:12:13.208236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.530  [2024-12-10T14:12:13.367Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.530 00:06:48.530 14:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t47nbc5j6fatf17b879q29cyn2byt794lynv6z8q8adfv6bpecxd50a9escehb5zilqbx4khldhgn647mpfcevkz05l75bdj846y41nvjrj7nl6lbct4e77avdnnr6f2311aosdjwazxi7mx47ksp6g5m1w5pvnjlyw7aadwrb5oudisnym90ns46021bobcxw1qje0ep3oifv9u2jksj1emfy1rz3m3jfsorahkhgrw3956gjedip5afkb16vrhmhb86glc4nm8w29veqz208vau9jqvqfdghb3dkextxfiv822ui0gbqgrqp66lphm22eex4huowz69kvr4gqtlf3e2qqiokwq0nmkzboqh4r7hjs3xca3zxk45xyegtjwfren8cw2lh7ojr1dbld1wg1tfs28uaefsbztfjxm5uf12o6d0nidvnel0dmv8b59pcc7eorjxyyo579i5qerlzqujsfiwbmn15n9nro2jwpi8g3lrftoaarh597dyzl0 == \t\4\7\n\b\c\5\j\6\f\a\t\f\1\7\b\8\7\9\q\2\9\c\y\n\2\b\y\t\7\9\4\l\y\n\v\6\z\8\q\8\a\d\f\v\6\b\p\e\c\x\d\5\0\a\9\e\s\c\e\h\b\5\z\i\l\q\b\x\4\k\h\l\d\h\g\n\6\4\7\m\p\f\c\e\v\k\z\0\5\l\7\5\b\d\j\8\4\6\y\4\1\n\v\j\r\j\7\n\l\6\l\b\c\t\4\e\7\7\a\v\d\n\n\r\6\f\2\3\1\1\a\o\s\d\j\w\a\z\x\i\7\m\x\4\7\k\s\p\6\g\5\m\1\w\5\p\v\n\j\l\y\w\7\a\a\d\w\r\b\5\o\u\d\i\s\n\y\m\9\0\n\s\4\6\0\2\1\b\o\b\c\x\w\1\q\j\e\0\e\p\3\o\i\f\v\9\u\2\j\k\s\j\1\e\m\f\y\1\r\z\3\m\3\j\f\s\o\r\a\h\k\h\g\r\w\3\9\5\6\g\j\e\d\i\p\5\a\f\k\b\1\6\v\r\h\m\h\b\8\6\g\l\c\4\n\m\8\w\2\9\v\e\q\z\2\0\8\v\a\u\9\j\q\v\q\f\d\g\h\b\3\d\k\e\x\t\x\f\i\v\8\2\2\u\i\0\g\b\q\g\r\q\p\6\6\l\p\h\m\2\2\e\e\x\4\h\u\o\w\z\6\9\k\v\r\4\g\q\t\l\f\3\e\2\q\q\i\o\k\w\q\0\n\m\k\z\b\o\q\h\4\r\7\h\j\s\3\x\c\a\3\z\x\k\4\5\x\y\e\g\t\j\w\f\r\e\n\8\c\w\2\l\h\7\o\j\r\1\d\b\l\d\1\w\g\1\t\f\s\2\8\u\a\e\f\s\b\z\t\f\j\x\m\5\u\f\1\2\o\6\d\0\n\i\d\v\n\e\l\0\d\m\v\8\b\5\9\p\c\c\7\e\o\r\j\x\y\y\o\5\7\9\i\5\q\e\r\l\z\q\u\j\s\f\i\w\b\m\n\1\5\n\9\n\r\o\2\j\w\p\i\8\g\3\l\r\f\t\o\a\a\r\h\5\9\7\d\y\z\l\0 ]] 00:06:48.530 14:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.530 14:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:48.789 [2024-12-10 14:12:13.394832] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:48.789 [2024-12-10 14:12:13.394939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61296 ] 00:06:48.789 [2024-12-10 14:12:13.530289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.789 [2024-12-10 14:12:13.556980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.789 [2024-12-10 14:12:13.583110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.789  [2024-12-10T14:12:13.886Z] Copying: 512/512 [B] (average 83 kBps) 00:06:49.049 00:06:49.049 14:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t47nbc5j6fatf17b879q29cyn2byt794lynv6z8q8adfv6bpecxd50a9escehb5zilqbx4khldhgn647mpfcevkz05l75bdj846y41nvjrj7nl6lbct4e77avdnnr6f2311aosdjwazxi7mx47ksp6g5m1w5pvnjlyw7aadwrb5oudisnym90ns46021bobcxw1qje0ep3oifv9u2jksj1emfy1rz3m3jfsorahkhgrw3956gjedip5afkb16vrhmhb86glc4nm8w29veqz208vau9jqvqfdghb3dkextxfiv822ui0gbqgrqp66lphm22eex4huowz69kvr4gqtlf3e2qqiokwq0nmkzboqh4r7hjs3xca3zxk45xyegtjwfren8cw2lh7ojr1dbld1wg1tfs28uaefsbztfjxm5uf12o6d0nidvnel0dmv8b59pcc7eorjxyyo579i5qerlzqujsfiwbmn15n9nro2jwpi8g3lrftoaarh597dyzl0 == \t\4\7\n\b\c\5\j\6\f\a\t\f\1\7\b\8\7\9\q\2\9\c\y\n\2\b\y\t\7\9\4\l\y\n\v\6\z\8\q\8\a\d\f\v\6\b\p\e\c\x\d\5\0\a\9\e\s\c\e\h\b\5\z\i\l\q\b\x\4\k\h\l\d\h\g\n\6\4\7\m\p\f\c\e\v\k\z\0\5\l\7\5\b\d\j\8\4\6\y\4\1\n\v\j\r\j\7\n\l\6\l\b\c\t\4\e\7\7\a\v\d\n\n\r\6\f\2\3\1\1\a\o\s\d\j\w\a\z\x\i\7\m\x\4\7\k\s\p\6\g\5\m\1\w\5\p\v\n\j\l\y\w\7\a\a\d\w\r\b\5\o\u\d\i\s\n\y\m\9\0\n\s\4\6\0\2\1\b\o\b\c\x\w\1\q\j\e\0\e\p\3\o\i\f\v\9\u\2\j\k\s\j\1\e\m\f\y\1\r\z\3\m\3\j\f\s\o\r\a\h\k\h\g\r\w\3\9\5\6\g\j\e\d\i\p\5\a\f\k\b\1\6\v\r\h\m\h\b\8\6\g\l\c\4\n\m\8\w\2\9\v\e\q\z\2\0\8\v\a\u\9\j\q\v\q\f\d\g\h\b\3\d\k\e\x\t\x\f\i\v\8\2\2\u\i\0\g\b\q\g\r\q\p\6\6\l\p\h\m\2\2\e\e\x\4\h\u\o\w\z\6\9\k\v\r\4\g\q\t\l\f\3\e\2\q\q\i\o\k\w\q\0\n\m\k\z\b\o\q\h\4\r\7\h\j\s\3\x\c\a\3\z\x\k\4\5\x\y\e\g\t\j\w\f\r\e\n\8\c\w\2\l\h\7\o\j\r\1\d\b\l\d\1\w\g\1\t\f\s\2\8\u\a\e\f\s\b\z\t\f\j\x\m\5\u\f\1\2\o\6\d\0\n\i\d\v\n\e\l\0\d\m\v\8\b\5\9\p\c\c\7\e\o\r\j\x\y\y\o\5\7\9\i\5\q\e\r\l\z\q\u\j\s\f\i\w\b\m\n\1\5\n\9\n\r\o\2\j\w\p\i\8\g\3\l\r\f\t\o\a\a\r\h\5\9\7\d\y\z\l\0 ]] 00:06:49.049 14:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.049 14:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:49.049 [2024-12-10 14:12:13.774857] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:49.049 [2024-12-10 14:12:13.774982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61300 ] 00:06:49.308 [2024-12-10 14:12:13.918442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.308 [2024-12-10 14:12:13.945076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.308 [2024-12-10 14:12:13.971429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.308  [2024-12-10T14:12:14.145Z] Copying: 512/512 [B] (average 250 kBps) 00:06:49.308 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t47nbc5j6fatf17b879q29cyn2byt794lynv6z8q8adfv6bpecxd50a9escehb5zilqbx4khldhgn647mpfcevkz05l75bdj846y41nvjrj7nl6lbct4e77avdnnr6f2311aosdjwazxi7mx47ksp6g5m1w5pvnjlyw7aadwrb5oudisnym90ns46021bobcxw1qje0ep3oifv9u2jksj1emfy1rz3m3jfsorahkhgrw3956gjedip5afkb16vrhmhb86glc4nm8w29veqz208vau9jqvqfdghb3dkextxfiv822ui0gbqgrqp66lphm22eex4huowz69kvr4gqtlf3e2qqiokwq0nmkzboqh4r7hjs3xca3zxk45xyegtjwfren8cw2lh7ojr1dbld1wg1tfs28uaefsbztfjxm5uf12o6d0nidvnel0dmv8b59pcc7eorjxyyo579i5qerlzqujsfiwbmn15n9nro2jwpi8g3lrftoaarh597dyzl0 == \t\4\7\n\b\c\5\j\6\f\a\t\f\1\7\b\8\7\9\q\2\9\c\y\n\2\b\y\t\7\9\4\l\y\n\v\6\z\8\q\8\a\d\f\v\6\b\p\e\c\x\d\5\0\a\9\e\s\c\e\h\b\5\z\i\l\q\b\x\4\k\h\l\d\h\g\n\6\4\7\m\p\f\c\e\v\k\z\0\5\l\7\5\b\d\j\8\4\6\y\4\1\n\v\j\r\j\7\n\l\6\l\b\c\t\4\e\7\7\a\v\d\n\n\r\6\f\2\3\1\1\a\o\s\d\j\w\a\z\x\i\7\m\x\4\7\k\s\p\6\g\5\m\1\w\5\p\v\n\j\l\y\w\7\a\a\d\w\r\b\5\o\u\d\i\s\n\y\m\9\0\n\s\4\6\0\2\1\b\o\b\c\x\w\1\q\j\e\0\e\p\3\o\i\f\v\9\u\2\j\k\s\j\1\e\m\f\y\1\r\z\3\m\3\j\f\s\o\r\a\h\k\h\g\r\w\3\9\5\6\g\j\e\d\i\p\5\a\f\k\b\1\6\v\r\h\m\h\b\8\6\g\l\c\4\n\m\8\w\2\9\v\e\q\z\2\0\8\v\a\u\9\j\q\v\q\f\d\g\h\b\3\d\k\e\x\t\x\f\i\v\8\2\2\u\i\0\g\b\q\g\r\q\p\6\6\l\p\h\m\2\2\e\e\x\4\h\u\o\w\z\6\9\k\v\r\4\g\q\t\l\f\3\e\2\q\q\i\o\k\w\q\0\n\m\k\z\b\o\q\h\4\r\7\h\j\s\3\x\c\a\3\z\x\k\4\5\x\y\e\g\t\j\w\f\r\e\n\8\c\w\2\l\h\7\o\j\r\1\d\b\l\d\1\w\g\1\t\f\s\2\8\u\a\e\f\s\b\z\t\f\j\x\m\5\u\f\1\2\o\6\d\0\n\i\d\v\n\e\l\0\d\m\v\8\b\5\9\p\c\c\7\e\o\r\j\x\y\y\o\5\7\9\i\5\q\e\r\l\z\q\u\j\s\f\i\w\b\m\n\1\5\n\9\n\r\o\2\j\w\p\i\8\g\3\l\r\f\t\o\a\a\r\h\5\9\7\d\y\z\l\0 ]] 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.308 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:49.567 [2024-12-10 14:12:14.183274] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:49.567 [2024-12-10 14:12:14.183384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61309 ] 00:06:49.567 [2024-12-10 14:12:14.325776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.567 [2024-12-10 14:12:14.353902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.567 [2024-12-10 14:12:14.382204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.826  [2024-12-10T14:12:14.663Z] Copying: 512/512 [B] (average 500 kBps) 00:06:49.826 00:06:49.826 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m4wxzxmcfsy4xnihqo6ky932oira3mf6z5is02wjn9r15k1xzcek1083sh0ga54e05udnta5aer3ora10iuraz0ccawpuoh6qpbx1v1f6bnbpjklmeomo92ld11i4jksq2egfz5iq8qj60gjg6kdk4gowg1lff2p5pwewy39pdh8j9e6ie53wmtrwl73w40xcl9cxw5kgnhxerdgajz1u8pyecssxutxh5ht6xeskr3qa7q4szjqp7bzh42237rva4ikyf3cmqb6be35dvmbaoh1xdfh0806r275o1gszq9nx2rqig43u3kdf9gh0ms252z4az0ossd2j1wawbjitlz3zgom159e77smjjyhd67rl495w32hioputweivfgq77h7z81u02jnkgyaqb9v2bh4d7ngv9x3lasws3gbych06uhg3yprajynx13vedwhb8pjhwlrvmfn7kkyo29rhw915mym2vtyj2fwimbqds3mt28zvey9zziwdp5j6xfn == \m\4\w\x\z\x\m\c\f\s\y\4\x\n\i\h\q\o\6\k\y\9\3\2\o\i\r\a\3\m\f\6\z\5\i\s\0\2\w\j\n\9\r\1\5\k\1\x\z\c\e\k\1\0\8\3\s\h\0\g\a\5\4\e\0\5\u\d\n\t\a\5\a\e\r\3\o\r\a\1\0\i\u\r\a\z\0\c\c\a\w\p\u\o\h\6\q\p\b\x\1\v\1\f\6\b\n\b\p\j\k\l\m\e\o\m\o\9\2\l\d\1\1\i\4\j\k\s\q\2\e\g\f\z\5\i\q\8\q\j\6\0\g\j\g\6\k\d\k\4\g\o\w\g\1\l\f\f\2\p\5\p\w\e\w\y\3\9\p\d\h\8\j\9\e\6\i\e\5\3\w\m\t\r\w\l\7\3\w\4\0\x\c\l\9\c\x\w\5\k\g\n\h\x\e\r\d\g\a\j\z\1\u\8\p\y\e\c\s\s\x\u\t\x\h\5\h\t\6\x\e\s\k\r\3\q\a\7\q\4\s\z\j\q\p\7\b\z\h\4\2\2\3\7\r\v\a\4\i\k\y\f\3\c\m\q\b\6\b\e\3\5\d\v\m\b\a\o\h\1\x\d\f\h\0\8\0\6\r\2\7\5\o\1\g\s\z\q\9\n\x\2\r\q\i\g\4\3\u\3\k\d\f\9\g\h\0\m\s\2\5\2\z\4\a\z\0\o\s\s\d\2\j\1\w\a\w\b\j\i\t\l\z\3\z\g\o\m\1\5\9\e\7\7\s\m\j\j\y\h\d\6\7\r\l\4\9\5\w\3\2\h\i\o\p\u\t\w\e\i\v\f\g\q\7\7\h\7\z\8\1\u\0\2\j\n\k\g\y\a\q\b\9\v\2\b\h\4\d\7\n\g\v\9\x\3\l\a\s\w\s\3\g\b\y\c\h\0\6\u\h\g\3\y\p\r\a\j\y\n\x\1\3\v\e\d\w\h\b\8\p\j\h\w\l\r\v\m\f\n\7\k\k\y\o\2\9\r\h\w\9\1\5\m\y\m\2\v\t\y\j\2\f\w\i\m\b\q\d\s\3\m\t\2\8\z\v\e\y\9\z\z\i\w\d\p\5\j\6\x\f\n ]] 00:06:49.826 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.826 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:49.826 [2024-12-10 14:12:14.570849] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:49.826 [2024-12-10 14:12:14.570986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61319 ] 00:06:50.086 [2024-12-10 14:12:14.716670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.086 [2024-12-10 14:12:14.749417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.086 [2024-12-10 14:12:14.776743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.086  [2024-12-10T14:12:14.923Z] Copying: 512/512 [B] (average 500 kBps) 00:06:50.086 00:06:50.086 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m4wxzxmcfsy4xnihqo6ky932oira3mf6z5is02wjn9r15k1xzcek1083sh0ga54e05udnta5aer3ora10iuraz0ccawpuoh6qpbx1v1f6bnbpjklmeomo92ld11i4jksq2egfz5iq8qj60gjg6kdk4gowg1lff2p5pwewy39pdh8j9e6ie53wmtrwl73w40xcl9cxw5kgnhxerdgajz1u8pyecssxutxh5ht6xeskr3qa7q4szjqp7bzh42237rva4ikyf3cmqb6be35dvmbaoh1xdfh0806r275o1gszq9nx2rqig43u3kdf9gh0ms252z4az0ossd2j1wawbjitlz3zgom159e77smjjyhd67rl495w32hioputweivfgq77h7z81u02jnkgyaqb9v2bh4d7ngv9x3lasws3gbych06uhg3yprajynx13vedwhb8pjhwlrvmfn7kkyo29rhw915mym2vtyj2fwimbqds3mt28zvey9zziwdp5j6xfn == \m\4\w\x\z\x\m\c\f\s\y\4\x\n\i\h\q\o\6\k\y\9\3\2\o\i\r\a\3\m\f\6\z\5\i\s\0\2\w\j\n\9\r\1\5\k\1\x\z\c\e\k\1\0\8\3\s\h\0\g\a\5\4\e\0\5\u\d\n\t\a\5\a\e\r\3\o\r\a\1\0\i\u\r\a\z\0\c\c\a\w\p\u\o\h\6\q\p\b\x\1\v\1\f\6\b\n\b\p\j\k\l\m\e\o\m\o\9\2\l\d\1\1\i\4\j\k\s\q\2\e\g\f\z\5\i\q\8\q\j\6\0\g\j\g\6\k\d\k\4\g\o\w\g\1\l\f\f\2\p\5\p\w\e\w\y\3\9\p\d\h\8\j\9\e\6\i\e\5\3\w\m\t\r\w\l\7\3\w\4\0\x\c\l\9\c\x\w\5\k\g\n\h\x\e\r\d\g\a\j\z\1\u\8\p\y\e\c\s\s\x\u\t\x\h\5\h\t\6\x\e\s\k\r\3\q\a\7\q\4\s\z\j\q\p\7\b\z\h\4\2\2\3\7\r\v\a\4\i\k\y\f\3\c\m\q\b\6\b\e\3\5\d\v\m\b\a\o\h\1\x\d\f\h\0\8\0\6\r\2\7\5\o\1\g\s\z\q\9\n\x\2\r\q\i\g\4\3\u\3\k\d\f\9\g\h\0\m\s\2\5\2\z\4\a\z\0\o\s\s\d\2\j\1\w\a\w\b\j\i\t\l\z\3\z\g\o\m\1\5\9\e\7\7\s\m\j\j\y\h\d\6\7\r\l\4\9\5\w\3\2\h\i\o\p\u\t\w\e\i\v\f\g\q\7\7\h\7\z\8\1\u\0\2\j\n\k\g\y\a\q\b\9\v\2\b\h\4\d\7\n\g\v\9\x\3\l\a\s\w\s\3\g\b\y\c\h\0\6\u\h\g\3\y\p\r\a\j\y\n\x\1\3\v\e\d\w\h\b\8\p\j\h\w\l\r\v\m\f\n\7\k\k\y\o\2\9\r\h\w\9\1\5\m\y\m\2\v\t\y\j\2\f\w\i\m\b\q\d\s\3\m\t\2\8\z\v\e\y\9\z\z\i\w\d\p\5\j\6\x\f\n ]] 00:06:50.086 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.086 14:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:50.344 [2024-12-10 14:12:14.961641] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:50.344 [2024-12-10 14:12:14.961743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61323 ] 00:06:50.344 [2024-12-10 14:12:15.108076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.344 [2024-12-10 14:12:15.138039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.344 [2024-12-10 14:12:15.164203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.602  [2024-12-10T14:12:15.439Z] Copying: 512/512 [B] (average 250 kBps) 00:06:50.602 00:06:50.602 14:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m4wxzxmcfsy4xnihqo6ky932oira3mf6z5is02wjn9r15k1xzcek1083sh0ga54e05udnta5aer3ora10iuraz0ccawpuoh6qpbx1v1f6bnbpjklmeomo92ld11i4jksq2egfz5iq8qj60gjg6kdk4gowg1lff2p5pwewy39pdh8j9e6ie53wmtrwl73w40xcl9cxw5kgnhxerdgajz1u8pyecssxutxh5ht6xeskr3qa7q4szjqp7bzh42237rva4ikyf3cmqb6be35dvmbaoh1xdfh0806r275o1gszq9nx2rqig43u3kdf9gh0ms252z4az0ossd2j1wawbjitlz3zgom159e77smjjyhd67rl495w32hioputweivfgq77h7z81u02jnkgyaqb9v2bh4d7ngv9x3lasws3gbych06uhg3yprajynx13vedwhb8pjhwlrvmfn7kkyo29rhw915mym2vtyj2fwimbqds3mt28zvey9zziwdp5j6xfn == \m\4\w\x\z\x\m\c\f\s\y\4\x\n\i\h\q\o\6\k\y\9\3\2\o\i\r\a\3\m\f\6\z\5\i\s\0\2\w\j\n\9\r\1\5\k\1\x\z\c\e\k\1\0\8\3\s\h\0\g\a\5\4\e\0\5\u\d\n\t\a\5\a\e\r\3\o\r\a\1\0\i\u\r\a\z\0\c\c\a\w\p\u\o\h\6\q\p\b\x\1\v\1\f\6\b\n\b\p\j\k\l\m\e\o\m\o\9\2\l\d\1\1\i\4\j\k\s\q\2\e\g\f\z\5\i\q\8\q\j\6\0\g\j\g\6\k\d\k\4\g\o\w\g\1\l\f\f\2\p\5\p\w\e\w\y\3\9\p\d\h\8\j\9\e\6\i\e\5\3\w\m\t\r\w\l\7\3\w\4\0\x\c\l\9\c\x\w\5\k\g\n\h\x\e\r\d\g\a\j\z\1\u\8\p\y\e\c\s\s\x\u\t\x\h\5\h\t\6\x\e\s\k\r\3\q\a\7\q\4\s\z\j\q\p\7\b\z\h\4\2\2\3\7\r\v\a\4\i\k\y\f\3\c\m\q\b\6\b\e\3\5\d\v\m\b\a\o\h\1\x\d\f\h\0\8\0\6\r\2\7\5\o\1\g\s\z\q\9\n\x\2\r\q\i\g\4\3\u\3\k\d\f\9\g\h\0\m\s\2\5\2\z\4\a\z\0\o\s\s\d\2\j\1\w\a\w\b\j\i\t\l\z\3\z\g\o\m\1\5\9\e\7\7\s\m\j\j\y\h\d\6\7\r\l\4\9\5\w\3\2\h\i\o\p\u\t\w\e\i\v\f\g\q\7\7\h\7\z\8\1\u\0\2\j\n\k\g\y\a\q\b\9\v\2\b\h\4\d\7\n\g\v\9\x\3\l\a\s\w\s\3\g\b\y\c\h\0\6\u\h\g\3\y\p\r\a\j\y\n\x\1\3\v\e\d\w\h\b\8\p\j\h\w\l\r\v\m\f\n\7\k\k\y\o\2\9\r\h\w\9\1\5\m\y\m\2\v\t\y\j\2\f\w\i\m\b\q\d\s\3\m\t\2\8\z\v\e\y\9\z\z\i\w\d\p\5\j\6\x\f\n ]] 00:06:50.602 14:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.602 14:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:50.602 [2024-12-10 14:12:15.330562] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:50.602 [2024-12-10 14:12:15.330645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61338 ] 00:06:50.862 [2024-12-10 14:12:15.465173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.862 [2024-12-10 14:12:15.492484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.862 [2024-12-10 14:12:15.518364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.862  [2024-12-10T14:12:15.699Z] Copying: 512/512 [B] (average 500 kBps) 00:06:50.862 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m4wxzxmcfsy4xnihqo6ky932oira3mf6z5is02wjn9r15k1xzcek1083sh0ga54e05udnta5aer3ora10iuraz0ccawpuoh6qpbx1v1f6bnbpjklmeomo92ld11i4jksq2egfz5iq8qj60gjg6kdk4gowg1lff2p5pwewy39pdh8j9e6ie53wmtrwl73w40xcl9cxw5kgnhxerdgajz1u8pyecssxutxh5ht6xeskr3qa7q4szjqp7bzh42237rva4ikyf3cmqb6be35dvmbaoh1xdfh0806r275o1gszq9nx2rqig43u3kdf9gh0ms252z4az0ossd2j1wawbjitlz3zgom159e77smjjyhd67rl495w32hioputweivfgq77h7z81u02jnkgyaqb9v2bh4d7ngv9x3lasws3gbych06uhg3yprajynx13vedwhb8pjhwlrvmfn7kkyo29rhw915mym2vtyj2fwimbqds3mt28zvey9zziwdp5j6xfn == \m\4\w\x\z\x\m\c\f\s\y\4\x\n\i\h\q\o\6\k\y\9\3\2\o\i\r\a\3\m\f\6\z\5\i\s\0\2\w\j\n\9\r\1\5\k\1\x\z\c\e\k\1\0\8\3\s\h\0\g\a\5\4\e\0\5\u\d\n\t\a\5\a\e\r\3\o\r\a\1\0\i\u\r\a\z\0\c\c\a\w\p\u\o\h\6\q\p\b\x\1\v\1\f\6\b\n\b\p\j\k\l\m\e\o\m\o\9\2\l\d\1\1\i\4\j\k\s\q\2\e\g\f\z\5\i\q\8\q\j\6\0\g\j\g\6\k\d\k\4\g\o\w\g\1\l\f\f\2\p\5\p\w\e\w\y\3\9\p\d\h\8\j\9\e\6\i\e\5\3\w\m\t\r\w\l\7\3\w\4\0\x\c\l\9\c\x\w\5\k\g\n\h\x\e\r\d\g\a\j\z\1\u\8\p\y\e\c\s\s\x\u\t\x\h\5\h\t\6\x\e\s\k\r\3\q\a\7\q\4\s\z\j\q\p\7\b\z\h\4\2\2\3\7\r\v\a\4\i\k\y\f\3\c\m\q\b\6\b\e\3\5\d\v\m\b\a\o\h\1\x\d\f\h\0\8\0\6\r\2\7\5\o\1\g\s\z\q\9\n\x\2\r\q\i\g\4\3\u\3\k\d\f\9\g\h\0\m\s\2\5\2\z\4\a\z\0\o\s\s\d\2\j\1\w\a\w\b\j\i\t\l\z\3\z\g\o\m\1\5\9\e\7\7\s\m\j\j\y\h\d\6\7\r\l\4\9\5\w\3\2\h\i\o\p\u\t\w\e\i\v\f\g\q\7\7\h\7\z\8\1\u\0\2\j\n\k\g\y\a\q\b\9\v\2\b\h\4\d\7\n\g\v\9\x\3\l\a\s\w\s\3\g\b\y\c\h\0\6\u\h\g\3\y\p\r\a\j\y\n\x\1\3\v\e\d\w\h\b\8\p\j\h\w\l\r\v\m\f\n\7\k\k\y\o\2\9\r\h\w\9\1\5\m\y\m\2\v\t\y\j\2\f\w\i\m\b\q\d\s\3\m\t\2\8\z\v\e\y\9\z\z\i\w\d\p\5\j\6\x\f\n ]] 00:06:50.862 00:06:50.862 real 0m3.097s 00:06:50.862 user 0m1.568s 00:06:50.862 sys 0m1.297s 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.862 ************************************ 00:06:50.862 END TEST dd_flags_misc 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:50.862 ************************************ 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:50.862 * Second test run, disabling liburing, forcing AIO 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.862 14:12:15 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.121 ************************************ 00:06:51.121 START TEST dd_flag_append_forced_aio 00:06:51.121 ************************************ 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=9utnxn6l63c3xk3xiyrkkhvclae2tde5 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=uv9lom45nf2ky55xsr2vtf87iyz236lj 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 9utnxn6l63c3xk3xiyrkkhvclae2tde5 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s uv9lom45nf2ky55xsr2vtf87iyz236lj 00:06:51.121 14:12:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:51.121 [2024-12-10 14:12:15.748816] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:51.121 [2024-12-10 14:12:15.748899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61361 ] 00:06:51.121 [2024-12-10 14:12:15.887160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.121 [2024-12-10 14:12:15.914153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.121 [2024-12-10 14:12:15.940282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.381  [2024-12-10T14:12:16.218Z] Copying: 32/32 [B] (average 31 kBps) 00:06:51.381 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ uv9lom45nf2ky55xsr2vtf87iyz236lj9utnxn6l63c3xk3xiyrkkhvclae2tde5 == \u\v\9\l\o\m\4\5\n\f\2\k\y\5\5\x\s\r\2\v\t\f\8\7\i\y\z\2\3\6\l\j\9\u\t\n\x\n\6\l\6\3\c\3\x\k\3\x\i\y\r\k\k\h\v\c\l\a\e\2\t\d\e\5 ]] 00:06:51.381 00:06:51.381 real 0m0.390s 00:06:51.381 user 0m0.192s 00:06:51.381 sys 0m0.081s 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.381 ************************************ 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.381 END TEST dd_flag_append_forced_aio 00:06:51.381 ************************************ 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.381 ************************************ 00:06:51.381 START TEST dd_flag_directory_forced_aio 00:06:51.381 ************************************ 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.381 14:12:16 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.381 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.381 [2024-12-10 14:12:16.196091] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:51.381 [2024-12-10 14:12:16.196208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61386 ] 00:06:51.640 [2024-12-10 14:12:16.339483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.640 [2024-12-10 14:12:16.373596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.640 [2024-12-10 14:12:16.405066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.640 [2024-12-10 14:12:16.426508] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.640 [2024-12-10 14:12:16.426568] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.640 [2024-12-10 14:12:16.426582] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.900 [2024-12-10 14:12:16.498512] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.900 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:51.900 [2024-12-10 14:12:16.615147] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:51.900 [2024-12-10 14:12:16.615257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61397 ] 00:06:52.159 [2024-12-10 14:12:16.763208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.159 [2024-12-10 14:12:16.797110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.159 [2024-12-10 14:12:16.829572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.159 [2024-12-10 14:12:16.850616] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:52.159 [2024-12-10 14:12:16.850674] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:52.159 [2024-12-10 14:12:16.850688] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.159 [2024-12-10 14:12:16.910624] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:52.159 14:12:16 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.159 00:06:52.159 real 0m0.818s 00:06:52.159 user 0m0.422s 00:06:52.159 sys 0m0.188s 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.159 14:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:52.159 ************************************ 00:06:52.159 END TEST dd_flag_directory_forced_aio 00:06:52.159 ************************************ 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:52.418 ************************************ 00:06:52.418 START TEST dd_flag_nofollow_forced_aio 00:06:52.418 ************************************ 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.418 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.419 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.419 [2024-12-10 14:12:17.080192] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:52.419 [2024-12-10 14:12:17.080284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61420 ] 00:06:52.419 [2024-12-10 14:12:17.225175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.678 [2024-12-10 14:12:17.255307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.678 [2024-12-10 14:12:17.284876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.678 [2024-12-10 14:12:17.302778] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:52.678 [2024-12-10 14:12:17.302863] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:52.678 [2024-12-10 14:12:17.302876] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.678 [2024-12-10 14:12:17.362672] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.678 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:52.678 [2024-12-10 14:12:17.472521] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:52.678 [2024-12-10 14:12:17.472622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:06:52.937 [2024-12-10 14:12:17.619599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.937 [2024-12-10 14:12:17.647287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.937 [2024-12-10 14:12:17.674116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.937 [2024-12-10 14:12:17.692719] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.937 [2024-12-10 14:12:17.692787] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.937 [2024-12-10 14:12:17.692816] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.937 [2024-12-10 14:12:17.752627] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.196 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:53.197 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:53.197 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:53.197 14:12:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.197 [2024-12-10 14:12:17.859509] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:53.197 [2024-12-10 14:12:17.859608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61437 ] 00:06:53.197 [2024-12-10 14:12:17.998732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.197 [2024-12-10 14:12:18.026295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.456 [2024-12-10 14:12:18.055498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.456  [2024-12-10T14:12:18.293Z] Copying: 512/512 [B] (average 500 kBps) 00:06:53.456 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 990er7y9n9ixe50msbrxh6idpf1zdejtseopn17tkb71z5czmj9rup6v4pi8d8357hlymjmoak15lusocs0yh45qqtkp44zc34qg4jaypm1rqhcvenq5145h1dz7pztca41qz4162jclpl44pubgjpio0vu3l1jv6z056ettgx73mvsnq2uq51gwrfpidd9liftmkqc2fvdpsfhpa5pvh2yilmlvfg617dii1pjh0v0hfhfta1epcb6q1vyo6l7w0egav9x6nflxdf32kva5dnqc01sbj7sbv4719200ry8em1oidhpba2w44qbszvqq643jvfhr3anilupn99vd5a06uszxn28ozj597khot8z5bnd7iugjvhzp80lsdckbjofqpu19tr4echf4lht5dz8oux5pcny8vhtvpmmu0g9yly1dhnuv97vs4e3nqrk5qimf0m7c0ci8iqzcslmhjqgrbbnm5fm614rgrhapfq8uprfqldgw09x33oogd4ey == \9\9\0\e\r\7\y\9\n\9\i\x\e\5\0\m\s\b\r\x\h\6\i\d\p\f\1\z\d\e\j\t\s\e\o\p\n\1\7\t\k\b\7\1\z\5\c\z\m\j\9\r\u\p\6\v\4\p\i\8\d\8\3\5\7\h\l\y\m\j\m\o\a\k\1\5\l\u\s\o\c\s\0\y\h\4\5\q\q\t\k\p\4\4\z\c\3\4\q\g\4\j\a\y\p\m\1\r\q\h\c\v\e\n\q\5\1\4\5\h\1\d\z\7\p\z\t\c\a\4\1\q\z\4\1\6\2\j\c\l\p\l\4\4\p\u\b\g\j\p\i\o\0\v\u\3\l\1\j\v\6\z\0\5\6\e\t\t\g\x\7\3\m\v\s\n\q\2\u\q\5\1\g\w\r\f\p\i\d\d\9\l\i\f\t\m\k\q\c\2\f\v\d\p\s\f\h\p\a\5\p\v\h\2\y\i\l\m\l\v\f\g\6\1\7\d\i\i\1\p\j\h\0\v\0\h\f\h\f\t\a\1\e\p\c\b\6\q\1\v\y\o\6\l\7\w\0\e\g\a\v\9\x\6\n\f\l\x\d\f\3\2\k\v\a\5\d\n\q\c\0\1\s\b\j\7\s\b\v\4\7\1\9\2\0\0\r\y\8\e\m\1\o\i\d\h\p\b\a\2\w\4\4\q\b\s\z\v\q\q\6\4\3\j\v\f\h\r\3\a\n\i\l\u\p\n\9\9\v\d\5\a\0\6\u\s\z\x\n\2\8\o\z\j\5\9\7\k\h\o\t\8\z\5\b\n\d\7\i\u\g\j\v\h\z\p\8\0\l\s\d\c\k\b\j\o\f\q\p\u\1\9\t\r\4\e\c\h\f\4\l\h\t\5\d\z\8\o\u\x\5\p\c\n\y\8\v\h\t\v\p\m\m\u\0\g\9\y\l\y\1\d\h\n\u\v\9\7\v\s\4\e\3\n\q\r\k\5\q\i\m\f\0\m\7\c\0\c\i\8\i\q\z\c\s\l\m\h\j\q\g\r\b\b\n\m\5\f\m\6\1\4\r\g\r\h\a\p\f\q\8\u\p\r\f\q\l\d\g\w\0\9\x\3\3\o\o\g\d\4\e\y ]] 00:06:53.456 00:06:53.456 real 0m1.194s 00:06:53.456 user 0m0.608s 00:06:53.456 sys 0m0.259s 00:06:53.456 ************************************ 00:06:53.456 END TEST dd_flag_nofollow_forced_aio 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:53.456 ************************************ 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.456 ************************************ 00:06:53.456 START TEST dd_flag_noatime_forced_aio 00:06:53.456 ************************************ 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:53.456 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:53.457 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.457 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733839938 00:06:53.457 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.457 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733839938 00:06:53.457 14:12:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:54.834 14:12:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.834 [2024-12-10 14:12:19.344255] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:54.834 [2024-12-10 14:12:19.344360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ] 00:06:54.834 [2024-12-10 14:12:19.499138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.834 [2024-12-10 14:12:19.544115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.834 [2024-12-10 14:12:19.579635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.834  [2024-12-10T14:12:19.931Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.094 00:06:55.094 14:12:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.094 14:12:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733839938 )) 00:06:55.094 14:12:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.094 14:12:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733839938 )) 00:06:55.094 14:12:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.094 [2024-12-10 14:12:19.828741] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:55.094 [2024-12-10 14:12:19.829340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61489 ] 00:06:55.353 [2024-12-10 14:12:19.976949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.353 [2024-12-10 14:12:20.025410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.353 [2024-12-10 14:12:20.058242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.353  [2024-12-10T14:12:20.481Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.644 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733839940 )) 00:06:55.644 00:06:55.644 real 0m1.960s 00:06:55.644 user 0m0.484s 00:06:55.644 sys 0m0.227s 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.644 ************************************ 00:06:55.644 END TEST dd_flag_noatime_forced_aio 00:06:55.644 ************************************ 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.644 ************************************ 00:06:55.644 START TEST dd_flags_misc_forced_aio 00:06:55.644 ************************************ 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.644 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:55.644 [2024-12-10 14:12:20.340076] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:55.644 [2024-12-10 14:12:20.340162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61510 ] 00:06:55.902 [2024-12-10 14:12:20.485743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.902 [2024-12-10 14:12:20.514688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.902 [2024-12-10 14:12:20.544218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.902  [2024-12-10T14:12:20.739Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.902 00:06:55.902 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5rody5h9cs1uh0hsvf2u5pnujnhzzcshqm49muwb8tzuq31haodr1f0w9yofwpkjsu904bun810ewoimuz78bp7zsve9kbbsd0ewlgl95yjzqjwiym3jq3hnawfy0se7bcvoqr26ucykzrnl5shzzxxrj675phgevtl1772ib7pmof9c7qbb2jvt3ds43hnqrvocscjt90n8db6xguf0i30574jab21j6unb5uzdl7gdonz65slsolnz39x49mtkvr2zccpu0ju8owb4a36h3lzim26a6j0bqtzj8fmf1peq6t64735262xzdl0mbwm0sugv7ib18vlq937e7g7wi8qsqoy0piozzusil9yansmc9it4rtdcbkjvrc8qsr9c0c2zzmsl3rcczc85o44jbjh2i3si8ybbhau7oaf69ewvddw989952sivje0tzfj266x9ksu574agg9dbte0r7vnk9e148im9ux65byrzz6o4e0ug71nh3bzrw7c4914q == 
\5\r\o\d\y\5\h\9\c\s\1\u\h\0\h\s\v\f\2\u\5\p\n\u\j\n\h\z\z\c\s\h\q\m\4\9\m\u\w\b\8\t\z\u\q\3\1\h\a\o\d\r\1\f\0\w\9\y\o\f\w\p\k\j\s\u\9\0\4\b\u\n\8\1\0\e\w\o\i\m\u\z\7\8\b\p\7\z\s\v\e\9\k\b\b\s\d\0\e\w\l\g\l\9\5\y\j\z\q\j\w\i\y\m\3\j\q\3\h\n\a\w\f\y\0\s\e\7\b\c\v\o\q\r\2\6\u\c\y\k\z\r\n\l\5\s\h\z\z\x\x\r\j\6\7\5\p\h\g\e\v\t\l\1\7\7\2\i\b\7\p\m\o\f\9\c\7\q\b\b\2\j\v\t\3\d\s\4\3\h\n\q\r\v\o\c\s\c\j\t\9\0\n\8\d\b\6\x\g\u\f\0\i\3\0\5\7\4\j\a\b\2\1\j\6\u\n\b\5\u\z\d\l\7\g\d\o\n\z\6\5\s\l\s\o\l\n\z\3\9\x\4\9\m\t\k\v\r\2\z\c\c\p\u\0\j\u\8\o\w\b\4\a\3\6\h\3\l\z\i\m\2\6\a\6\j\0\b\q\t\z\j\8\f\m\f\1\p\e\q\6\t\6\4\7\3\5\2\6\2\x\z\d\l\0\m\b\w\m\0\s\u\g\v\7\i\b\1\8\v\l\q\9\3\7\e\7\g\7\w\i\8\q\s\q\o\y\0\p\i\o\z\z\u\s\i\l\9\y\a\n\s\m\c\9\i\t\4\r\t\d\c\b\k\j\v\r\c\8\q\s\r\9\c\0\c\2\z\z\m\s\l\3\r\c\c\z\c\8\5\o\4\4\j\b\j\h\2\i\3\s\i\8\y\b\b\h\a\u\7\o\a\f\6\9\e\w\v\d\d\w\9\8\9\9\5\2\s\i\v\j\e\0\t\z\f\j\2\6\6\x\9\k\s\u\5\7\4\a\g\g\9\d\b\t\e\0\r\7\v\n\k\9\e\1\4\8\i\m\9\u\x\6\5\b\y\r\z\z\6\o\4\e\0\u\g\7\1\n\h\3\b\z\r\w\7\c\4\9\1\4\q ]] 00:06:55.902 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.902 14:12:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:56.162 [2024-12-10 14:12:20.758824] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:56.162 [2024-12-10 14:12:20.759127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61523 ] 00:06:56.162 [2024-12-10 14:12:20.904074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.162 [2024-12-10 14:12:20.932207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.162 [2024-12-10 14:12:20.959172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.162  [2024-12-10T14:12:21.258Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.421 00:06:56.421 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5rody5h9cs1uh0hsvf2u5pnujnhzzcshqm49muwb8tzuq31haodr1f0w9yofwpkjsu904bun810ewoimuz78bp7zsve9kbbsd0ewlgl95yjzqjwiym3jq3hnawfy0se7bcvoqr26ucykzrnl5shzzxxrj675phgevtl1772ib7pmof9c7qbb2jvt3ds43hnqrvocscjt90n8db6xguf0i30574jab21j6unb5uzdl7gdonz65slsolnz39x49mtkvr2zccpu0ju8owb4a36h3lzim26a6j0bqtzj8fmf1peq6t64735262xzdl0mbwm0sugv7ib18vlq937e7g7wi8qsqoy0piozzusil9yansmc9it4rtdcbkjvrc8qsr9c0c2zzmsl3rcczc85o44jbjh2i3si8ybbhau7oaf69ewvddw989952sivje0tzfj266x9ksu574agg9dbte0r7vnk9e148im9ux65byrzz6o4e0ug71nh3bzrw7c4914q == 
\5\r\o\d\y\5\h\9\c\s\1\u\h\0\h\s\v\f\2\u\5\p\n\u\j\n\h\z\z\c\s\h\q\m\4\9\m\u\w\b\8\t\z\u\q\3\1\h\a\o\d\r\1\f\0\w\9\y\o\f\w\p\k\j\s\u\9\0\4\b\u\n\8\1\0\e\w\o\i\m\u\z\7\8\b\p\7\z\s\v\e\9\k\b\b\s\d\0\e\w\l\g\l\9\5\y\j\z\q\j\w\i\y\m\3\j\q\3\h\n\a\w\f\y\0\s\e\7\b\c\v\o\q\r\2\6\u\c\y\k\z\r\n\l\5\s\h\z\z\x\x\r\j\6\7\5\p\h\g\e\v\t\l\1\7\7\2\i\b\7\p\m\o\f\9\c\7\q\b\b\2\j\v\t\3\d\s\4\3\h\n\q\r\v\o\c\s\c\j\t\9\0\n\8\d\b\6\x\g\u\f\0\i\3\0\5\7\4\j\a\b\2\1\j\6\u\n\b\5\u\z\d\l\7\g\d\o\n\z\6\5\s\l\s\o\l\n\z\3\9\x\4\9\m\t\k\v\r\2\z\c\c\p\u\0\j\u\8\o\w\b\4\a\3\6\h\3\l\z\i\m\2\6\a\6\j\0\b\q\t\z\j\8\f\m\f\1\p\e\q\6\t\6\4\7\3\5\2\6\2\x\z\d\l\0\m\b\w\m\0\s\u\g\v\7\i\b\1\8\v\l\q\9\3\7\e\7\g\7\w\i\8\q\s\q\o\y\0\p\i\o\z\z\u\s\i\l\9\y\a\n\s\m\c\9\i\t\4\r\t\d\c\b\k\j\v\r\c\8\q\s\r\9\c\0\c\2\z\z\m\s\l\3\r\c\c\z\c\8\5\o\4\4\j\b\j\h\2\i\3\s\i\8\y\b\b\h\a\u\7\o\a\f\6\9\e\w\v\d\d\w\9\8\9\9\5\2\s\i\v\j\e\0\t\z\f\j\2\6\6\x\9\k\s\u\5\7\4\a\g\g\9\d\b\t\e\0\r\7\v\n\k\9\e\1\4\8\i\m\9\u\x\6\5\b\y\r\z\z\6\o\4\e\0\u\g\7\1\n\h\3\b\z\r\w\7\c\4\9\1\4\q ]] 00:06:56.421 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.421 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:56.421 [2024-12-10 14:12:21.179134] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:56.421 [2024-12-10 14:12:21.179225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61525 ] 00:06:56.681 [2024-12-10 14:12:21.323061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.681 [2024-12-10 14:12:21.354625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.681 [2024-12-10 14:12:21.385302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.681  [2024-12-10T14:12:21.777Z] Copying: 512/512 [B] (average 250 kBps) 00:06:56.940 00:06:56.940 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5rody5h9cs1uh0hsvf2u5pnujnhzzcshqm49muwb8tzuq31haodr1f0w9yofwpkjsu904bun810ewoimuz78bp7zsve9kbbsd0ewlgl95yjzqjwiym3jq3hnawfy0se7bcvoqr26ucykzrnl5shzzxxrj675phgevtl1772ib7pmof9c7qbb2jvt3ds43hnqrvocscjt90n8db6xguf0i30574jab21j6unb5uzdl7gdonz65slsolnz39x49mtkvr2zccpu0ju8owb4a36h3lzim26a6j0bqtzj8fmf1peq6t64735262xzdl0mbwm0sugv7ib18vlq937e7g7wi8qsqoy0piozzusil9yansmc9it4rtdcbkjvrc8qsr9c0c2zzmsl3rcczc85o44jbjh2i3si8ybbhau7oaf69ewvddw989952sivje0tzfj266x9ksu574agg9dbte0r7vnk9e148im9ux65byrzz6o4e0ug71nh3bzrw7c4914q == 
\5\r\o\d\y\5\h\9\c\s\1\u\h\0\h\s\v\f\2\u\5\p\n\u\j\n\h\z\z\c\s\h\q\m\4\9\m\u\w\b\8\t\z\u\q\3\1\h\a\o\d\r\1\f\0\w\9\y\o\f\w\p\k\j\s\u\9\0\4\b\u\n\8\1\0\e\w\o\i\m\u\z\7\8\b\p\7\z\s\v\e\9\k\b\b\s\d\0\e\w\l\g\l\9\5\y\j\z\q\j\w\i\y\m\3\j\q\3\h\n\a\w\f\y\0\s\e\7\b\c\v\o\q\r\2\6\u\c\y\k\z\r\n\l\5\s\h\z\z\x\x\r\j\6\7\5\p\h\g\e\v\t\l\1\7\7\2\i\b\7\p\m\o\f\9\c\7\q\b\b\2\j\v\t\3\d\s\4\3\h\n\q\r\v\o\c\s\c\j\t\9\0\n\8\d\b\6\x\g\u\f\0\i\3\0\5\7\4\j\a\b\2\1\j\6\u\n\b\5\u\z\d\l\7\g\d\o\n\z\6\5\s\l\s\o\l\n\z\3\9\x\4\9\m\t\k\v\r\2\z\c\c\p\u\0\j\u\8\o\w\b\4\a\3\6\h\3\l\z\i\m\2\6\a\6\j\0\b\q\t\z\j\8\f\m\f\1\p\e\q\6\t\6\4\7\3\5\2\6\2\x\z\d\l\0\m\b\w\m\0\s\u\g\v\7\i\b\1\8\v\l\q\9\3\7\e\7\g\7\w\i\8\q\s\q\o\y\0\p\i\o\z\z\u\s\i\l\9\y\a\n\s\m\c\9\i\t\4\r\t\d\c\b\k\j\v\r\c\8\q\s\r\9\c\0\c\2\z\z\m\s\l\3\r\c\c\z\c\8\5\o\4\4\j\b\j\h\2\i\3\s\i\8\y\b\b\h\a\u\7\o\a\f\6\9\e\w\v\d\d\w\9\8\9\9\5\2\s\i\v\j\e\0\t\z\f\j\2\6\6\x\9\k\s\u\5\7\4\a\g\g\9\d\b\t\e\0\r\7\v\n\k\9\e\1\4\8\i\m\9\u\x\6\5\b\y\r\z\z\6\o\4\e\0\u\g\7\1\n\h\3\b\z\r\w\7\c\4\9\1\4\q ]] 00:06:56.940 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.940 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:56.940 [2024-12-10 14:12:21.595279] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:56.940 [2024-12-10 14:12:21.595368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61538 ] 00:06:56.940 [2024-12-10 14:12:21.738377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.940 [2024-12-10 14:12:21.765034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.200 [2024-12-10 14:12:21.792260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.200  [2024-12-10T14:12:22.037Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.200 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5rody5h9cs1uh0hsvf2u5pnujnhzzcshqm49muwb8tzuq31haodr1f0w9yofwpkjsu904bun810ewoimuz78bp7zsve9kbbsd0ewlgl95yjzqjwiym3jq3hnawfy0se7bcvoqr26ucykzrnl5shzzxxrj675phgevtl1772ib7pmof9c7qbb2jvt3ds43hnqrvocscjt90n8db6xguf0i30574jab21j6unb5uzdl7gdonz65slsolnz39x49mtkvr2zccpu0ju8owb4a36h3lzim26a6j0bqtzj8fmf1peq6t64735262xzdl0mbwm0sugv7ib18vlq937e7g7wi8qsqoy0piozzusil9yansmc9it4rtdcbkjvrc8qsr9c0c2zzmsl3rcczc85o44jbjh2i3si8ybbhau7oaf69ewvddw989952sivje0tzfj266x9ksu574agg9dbte0r7vnk9e148im9ux65byrzz6o4e0ug71nh3bzrw7c4914q == 
\5\r\o\d\y\5\h\9\c\s\1\u\h\0\h\s\v\f\2\u\5\p\n\u\j\n\h\z\z\c\s\h\q\m\4\9\m\u\w\b\8\t\z\u\q\3\1\h\a\o\d\r\1\f\0\w\9\y\o\f\w\p\k\j\s\u\9\0\4\b\u\n\8\1\0\e\w\o\i\m\u\z\7\8\b\p\7\z\s\v\e\9\k\b\b\s\d\0\e\w\l\g\l\9\5\y\j\z\q\j\w\i\y\m\3\j\q\3\h\n\a\w\f\y\0\s\e\7\b\c\v\o\q\r\2\6\u\c\y\k\z\r\n\l\5\s\h\z\z\x\x\r\j\6\7\5\p\h\g\e\v\t\l\1\7\7\2\i\b\7\p\m\o\f\9\c\7\q\b\b\2\j\v\t\3\d\s\4\3\h\n\q\r\v\o\c\s\c\j\t\9\0\n\8\d\b\6\x\g\u\f\0\i\3\0\5\7\4\j\a\b\2\1\j\6\u\n\b\5\u\z\d\l\7\g\d\o\n\z\6\5\s\l\s\o\l\n\z\3\9\x\4\9\m\t\k\v\r\2\z\c\c\p\u\0\j\u\8\o\w\b\4\a\3\6\h\3\l\z\i\m\2\6\a\6\j\0\b\q\t\z\j\8\f\m\f\1\p\e\q\6\t\6\4\7\3\5\2\6\2\x\z\d\l\0\m\b\w\m\0\s\u\g\v\7\i\b\1\8\v\l\q\9\3\7\e\7\g\7\w\i\8\q\s\q\o\y\0\p\i\o\z\z\u\s\i\l\9\y\a\n\s\m\c\9\i\t\4\r\t\d\c\b\k\j\v\r\c\8\q\s\r\9\c\0\c\2\z\z\m\s\l\3\r\c\c\z\c\8\5\o\4\4\j\b\j\h\2\i\3\s\i\8\y\b\b\h\a\u\7\o\a\f\6\9\e\w\v\d\d\w\9\8\9\9\5\2\s\i\v\j\e\0\t\z\f\j\2\6\6\x\9\k\s\u\5\7\4\a\g\g\9\d\b\t\e\0\r\7\v\n\k\9\e\1\4\8\i\m\9\u\x\6\5\b\y\r\z\z\6\o\4\e\0\u\g\7\1\n\h\3\b\z\r\w\7\c\4\9\1\4\q ]] 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.200 14:12:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:57.200 [2024-12-10 14:12:22.014099] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:57.200 [2024-12-10 14:12:22.014206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61540 ] 00:06:57.459 [2024-12-10 14:12:22.160493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.459 [2024-12-10 14:12:22.195510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.459 [2024-12-10 14:12:22.227588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.459  [2024-12-10T14:12:22.556Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.719 00:06:57.719 14:12:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pd10h0eaitad6eqg6a571u84a4a6tjow00a93a5qesfgzgh3402rl8sh4iszh7odk8jvcyc7u2vmtqthwtv8v10zd65d3zan1el3v2nb1w01i9x54u4e8xpm2lab1kh6eb1i7czu6sl0yblhld96oj3111v6fyg8fhf8h3gcs1ay8l6bc43glkp2trojer7ydwk8k7id0e310su767hxe4gnvxk4iek0vj49gohgm87gebjrdt7lbcu2bo3qutq6pc53fl2zgbljek8yyf7pybbde47hae3ydm2sk4y3ztbj3ku8cx1qi365vbuhok1hws3q0nzu7i0wfban4n2xbir2adadp1ve1iopo7qjq1j2ql1iwg0y0wqnlda643ckoul9zy6aacq6y2t52osjtedwu0mkivvhsu9cr0045w0vfoznw4wfyew5dbga9hlg44sid8widrk98cgm604pldob0cq9uv89v6akcv1ibmk7fkmilvyk2t29kz3ihep5 == \p\d\1\0\h\0\e\a\i\t\a\d\6\e\q\g\6\a\5\7\1\u\8\4\a\4\a\6\t\j\o\w\0\0\a\9\3\a\5\q\e\s\f\g\z\g\h\3\4\0\2\r\l\8\s\h\4\i\s\z\h\7\o\d\k\8\j\v\c\y\c\7\u\2\v\m\t\q\t\h\w\t\v\8\v\1\0\z\d\6\5\d\3\z\a\n\1\e\l\3\v\2\n\b\1\w\0\1\i\9\x\5\4\u\4\e\8\x\p\m\2\l\a\b\1\k\h\6\e\b\1\i\7\c\z\u\6\s\l\0\y\b\l\h\l\d\9\6\o\j\3\1\1\1\v\6\f\y\g\8\f\h\f\8\h\3\g\c\s\1\a\y\8\l\6\b\c\4\3\g\l\k\p\2\t\r\o\j\e\r\7\y\d\w\k\8\k\7\i\d\0\e\3\1\0\s\u\7\6\7\h\x\e\4\g\n\v\x\k\4\i\e\k\0\v\j\4\9\g\o\h\g\m\8\7\g\e\b\j\r\d\t\7\l\b\c\u\2\b\o\3\q\u\t\q\6\p\c\5\3\f\l\2\z\g\b\l\j\e\k\8\y\y\f\7\p\y\b\b\d\e\4\7\h\a\e\3\y\d\m\2\s\k\4\y\3\z\t\b\j\3\k\u\8\c\x\1\q\i\3\6\5\v\b\u\h\o\k\1\h\w\s\3\q\0\n\z\u\7\i\0\w\f\b\a\n\4\n\2\x\b\i\r\2\a\d\a\d\p\1\v\e\1\i\o\p\o\7\q\j\q\1\j\2\q\l\1\i\w\g\0\y\0\w\q\n\l\d\a\6\4\3\c\k\o\u\l\9\z\y\6\a\a\c\q\6\y\2\t\5\2\o\s\j\t\e\d\w\u\0\m\k\i\v\v\h\s\u\9\c\r\0\0\4\5\w\0\v\f\o\z\n\w\4\w\f\y\e\w\5\d\b\g\a\9\h\l\g\4\4\s\i\d\8\w\i\d\r\k\9\8\c\g\m\6\0\4\p\l\d\o\b\0\c\q\9\u\v\8\9\v\6\a\k\c\v\1\i\b\m\k\7\f\k\m\i\l\v\y\k\2\t\2\9\k\z\3\i\h\e\p\5 ]] 00:06:57.719 14:12:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.719 14:12:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:57.719 [2024-12-10 14:12:22.465932] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:57.719 [2024-12-10 14:12:22.466042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61552 ] 00:06:57.978 [2024-12-10 14:12:22.611128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.978 [2024-12-10 14:12:22.647369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.978 [2024-12-10 14:12:22.680056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.978  [2024-12-10T14:12:23.074Z] Copying: 512/512 [B] (average 500 kBps) 00:06:58.237 00:06:58.237 14:12:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pd10h0eaitad6eqg6a571u84a4a6tjow00a93a5qesfgzgh3402rl8sh4iszh7odk8jvcyc7u2vmtqthwtv8v10zd65d3zan1el3v2nb1w01i9x54u4e8xpm2lab1kh6eb1i7czu6sl0yblhld96oj3111v6fyg8fhf8h3gcs1ay8l6bc43glkp2trojer7ydwk8k7id0e310su767hxe4gnvxk4iek0vj49gohgm87gebjrdt7lbcu2bo3qutq6pc53fl2zgbljek8yyf7pybbde47hae3ydm2sk4y3ztbj3ku8cx1qi365vbuhok1hws3q0nzu7i0wfban4n2xbir2adadp1ve1iopo7qjq1j2ql1iwg0y0wqnlda643ckoul9zy6aacq6y2t52osjtedwu0mkivvhsu9cr0045w0vfoznw4wfyew5dbga9hlg44sid8widrk98cgm604pldob0cq9uv89v6akcv1ibmk7fkmilvyk2t29kz3ihep5 == \p\d\1\0\h\0\e\a\i\t\a\d\6\e\q\g\6\a\5\7\1\u\8\4\a\4\a\6\t\j\o\w\0\0\a\9\3\a\5\q\e\s\f\g\z\g\h\3\4\0\2\r\l\8\s\h\4\i\s\z\h\7\o\d\k\8\j\v\c\y\c\7\u\2\v\m\t\q\t\h\w\t\v\8\v\1\0\z\d\6\5\d\3\z\a\n\1\e\l\3\v\2\n\b\1\w\0\1\i\9\x\5\4\u\4\e\8\x\p\m\2\l\a\b\1\k\h\6\e\b\1\i\7\c\z\u\6\s\l\0\y\b\l\h\l\d\9\6\o\j\3\1\1\1\v\6\f\y\g\8\f\h\f\8\h\3\g\c\s\1\a\y\8\l\6\b\c\4\3\g\l\k\p\2\t\r\o\j\e\r\7\y\d\w\k\8\k\7\i\d\0\e\3\1\0\s\u\7\6\7\h\x\e\4\g\n\v\x\k\4\i\e\k\0\v\j\4\9\g\o\h\g\m\8\7\g\e\b\j\r\d\t\7\l\b\c\u\2\b\o\3\q\u\t\q\6\p\c\5\3\f\l\2\z\g\b\l\j\e\k\8\y\y\f\7\p\y\b\b\d\e\4\7\h\a\e\3\y\d\m\2\s\k\4\y\3\z\t\b\j\3\k\u\8\c\x\1\q\i\3\6\5\v\b\u\h\o\k\1\h\w\s\3\q\0\n\z\u\7\i\0\w\f\b\a\n\4\n\2\x\b\i\r\2\a\d\a\d\p\1\v\e\1\i\o\p\o\7\q\j\q\1\j\2\q\l\1\i\w\g\0\y\0\w\q\n\l\d\a\6\4\3\c\k\o\u\l\9\z\y\6\a\a\c\q\6\y\2\t\5\2\o\s\j\t\e\d\w\u\0\m\k\i\v\v\h\s\u\9\c\r\0\0\4\5\w\0\v\f\o\z\n\w\4\w\f\y\e\w\5\d\b\g\a\9\h\l\g\4\4\s\i\d\8\w\i\d\r\k\9\8\c\g\m\6\0\4\p\l\d\o\b\0\c\q\9\u\v\8\9\v\6\a\k\c\v\1\i\b\m\k\7\f\k\m\i\l\v\y\k\2\t\2\9\k\z\3\i\h\e\p\5 ]] 00:06:58.237 14:12:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.237 14:12:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:58.237 [2024-12-10 14:12:22.902823] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:58.237 [2024-12-10 14:12:22.902939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:06:58.238 [2024-12-10 14:12:23.047694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.497 [2024-12-10 14:12:23.076789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.497 [2024-12-10 14:12:23.103436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.497  [2024-12-10T14:12:23.334Z] Copying: 512/512 [B] (average 166 kBps) 00:06:58.497 00:06:58.497 14:12:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pd10h0eaitad6eqg6a571u84a4a6tjow00a93a5qesfgzgh3402rl8sh4iszh7odk8jvcyc7u2vmtqthwtv8v10zd65d3zan1el3v2nb1w01i9x54u4e8xpm2lab1kh6eb1i7czu6sl0yblhld96oj3111v6fyg8fhf8h3gcs1ay8l6bc43glkp2trojer7ydwk8k7id0e310su767hxe4gnvxk4iek0vj49gohgm87gebjrdt7lbcu2bo3qutq6pc53fl2zgbljek8yyf7pybbde47hae3ydm2sk4y3ztbj3ku8cx1qi365vbuhok1hws3q0nzu7i0wfban4n2xbir2adadp1ve1iopo7qjq1j2ql1iwg0y0wqnlda643ckoul9zy6aacq6y2t52osjtedwu0mkivvhsu9cr0045w0vfoznw4wfyew5dbga9hlg44sid8widrk98cgm604pldob0cq9uv89v6akcv1ibmk7fkmilvyk2t29kz3ihep5 == \p\d\1\0\h\0\e\a\i\t\a\d\6\e\q\g\6\a\5\7\1\u\8\4\a\4\a\6\t\j\o\w\0\0\a\9\3\a\5\q\e\s\f\g\z\g\h\3\4\0\2\r\l\8\s\h\4\i\s\z\h\7\o\d\k\8\j\v\c\y\c\7\u\2\v\m\t\q\t\h\w\t\v\8\v\1\0\z\d\6\5\d\3\z\a\n\1\e\l\3\v\2\n\b\1\w\0\1\i\9\x\5\4\u\4\e\8\x\p\m\2\l\a\b\1\k\h\6\e\b\1\i\7\c\z\u\6\s\l\0\y\b\l\h\l\d\9\6\o\j\3\1\1\1\v\6\f\y\g\8\f\h\f\8\h\3\g\c\s\1\a\y\8\l\6\b\c\4\3\g\l\k\p\2\t\r\o\j\e\r\7\y\d\w\k\8\k\7\i\d\0\e\3\1\0\s\u\7\6\7\h\x\e\4\g\n\v\x\k\4\i\e\k\0\v\j\4\9\g\o\h\g\m\8\7\g\e\b\j\r\d\t\7\l\b\c\u\2\b\o\3\q\u\t\q\6\p\c\5\3\f\l\2\z\g\b\l\j\e\k\8\y\y\f\7\p\y\b\b\d\e\4\7\h\a\e\3\y\d\m\2\s\k\4\y\3\z\t\b\j\3\k\u\8\c\x\1\q\i\3\6\5\v\b\u\h\o\k\1\h\w\s\3\q\0\n\z\u\7\i\0\w\f\b\a\n\4\n\2\x\b\i\r\2\a\d\a\d\p\1\v\e\1\i\o\p\o\7\q\j\q\1\j\2\q\l\1\i\w\g\0\y\0\w\q\n\l\d\a\6\4\3\c\k\o\u\l\9\z\y\6\a\a\c\q\6\y\2\t\5\2\o\s\j\t\e\d\w\u\0\m\k\i\v\v\h\s\u\9\c\r\0\0\4\5\w\0\v\f\o\z\n\w\4\w\f\y\e\w\5\d\b\g\a\9\h\l\g\4\4\s\i\d\8\w\i\d\r\k\9\8\c\g\m\6\0\4\p\l\d\o\b\0\c\q\9\u\v\8\9\v\6\a\k\c\v\1\i\b\m\k\7\f\k\m\i\l\v\y\k\2\t\2\9\k\z\3\i\h\e\p\5 ]] 00:06:58.497 14:12:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.497 14:12:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:58.497 [2024-12-10 14:12:23.324924] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:06:58.497 [2024-12-10 14:12:23.325030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61557 ] 00:06:58.756 [2024-12-10 14:12:23.471477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.756 [2024-12-10 14:12:23.503746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.756 [2024-12-10 14:12:23.531533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.756  [2024-12-10T14:12:23.852Z] Copying: 512/512 [B] (average 250 kBps) 00:06:59.015 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pd10h0eaitad6eqg6a571u84a4a6tjow00a93a5qesfgzgh3402rl8sh4iszh7odk8jvcyc7u2vmtqthwtv8v10zd65d3zan1el3v2nb1w01i9x54u4e8xpm2lab1kh6eb1i7czu6sl0yblhld96oj3111v6fyg8fhf8h3gcs1ay8l6bc43glkp2trojer7ydwk8k7id0e310su767hxe4gnvxk4iek0vj49gohgm87gebjrdt7lbcu2bo3qutq6pc53fl2zgbljek8yyf7pybbde47hae3ydm2sk4y3ztbj3ku8cx1qi365vbuhok1hws3q0nzu7i0wfban4n2xbir2adadp1ve1iopo7qjq1j2ql1iwg0y0wqnlda643ckoul9zy6aacq6y2t52osjtedwu0mkivvhsu9cr0045w0vfoznw4wfyew5dbga9hlg44sid8widrk98cgm604pldob0cq9uv89v6akcv1ibmk7fkmilvyk2t29kz3ihep5 == \p\d\1\0\h\0\e\a\i\t\a\d\6\e\q\g\6\a\5\7\1\u\8\4\a\4\a\6\t\j\o\w\0\0\a\9\3\a\5\q\e\s\f\g\z\g\h\3\4\0\2\r\l\8\s\h\4\i\s\z\h\7\o\d\k\8\j\v\c\y\c\7\u\2\v\m\t\q\t\h\w\t\v\8\v\1\0\z\d\6\5\d\3\z\a\n\1\e\l\3\v\2\n\b\1\w\0\1\i\9\x\5\4\u\4\e\8\x\p\m\2\l\a\b\1\k\h\6\e\b\1\i\7\c\z\u\6\s\l\0\y\b\l\h\l\d\9\6\o\j\3\1\1\1\v\6\f\y\g\8\f\h\f\8\h\3\g\c\s\1\a\y\8\l\6\b\c\4\3\g\l\k\p\2\t\r\o\j\e\r\7\y\d\w\k\8\k\7\i\d\0\e\3\1\0\s\u\7\6\7\h\x\e\4\g\n\v\x\k\4\i\e\k\0\v\j\4\9\g\o\h\g\m\8\7\g\e\b\j\r\d\t\7\l\b\c\u\2\b\o\3\q\u\t\q\6\p\c\5\3\f\l\2\z\g\b\l\j\e\k\8\y\y\f\7\p\y\b\b\d\e\4\7\h\a\e\3\y\d\m\2\s\k\4\y\3\z\t\b\j\3\k\u\8\c\x\1\q\i\3\6\5\v\b\u\h\o\k\1\h\w\s\3\q\0\n\z\u\7\i\0\w\f\b\a\n\4\n\2\x\b\i\r\2\a\d\a\d\p\1\v\e\1\i\o\p\o\7\q\j\q\1\j\2\q\l\1\i\w\g\0\y\0\w\q\n\l\d\a\6\4\3\c\k\o\u\l\9\z\y\6\a\a\c\q\6\y\2\t\5\2\o\s\j\t\e\d\w\u\0\m\k\i\v\v\h\s\u\9\c\r\0\0\4\5\w\0\v\f\o\z\n\w\4\w\f\y\e\w\5\d\b\g\a\9\h\l\g\4\4\s\i\d\8\w\i\d\r\k\9\8\c\g\m\6\0\4\p\l\d\o\b\0\c\q\9\u\v\8\9\v\6\a\k\c\v\1\i\b\m\k\7\f\k\m\i\l\v\y\k\2\t\2\9\k\z\3\i\h\e\p\5 ]] 00:06:59.016 00:06:59.016 real 0m3.404s 00:06:59.016 user 0m1.680s 00:06:59.016 sys 0m0.749s 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.016 ************************************ 00:06:59.016 END TEST dd_flags_misc_forced_aio 00:06:59.016 ************************************ 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:59.016 00:06:59.016 real 0m15.796s 00:06:59.016 user 0m6.854s 00:06:59.016 sys 0m4.255s 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.016 14:12:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:59.016 ************************************ 00:06:59.016 END TEST spdk_dd_posix 00:06:59.016 ************************************ 00:06:59.016 14:12:23 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:59.016 14:12:23 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.016 14:12:23 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.016 14:12:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.016 ************************************ 00:06:59.016 START TEST spdk_dd_malloc 00:06:59.016 ************************************ 00:06:59.016 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:59.275 * Looking for test storage... 00:06:59.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.275 --rc genhtml_branch_coverage=1 00:06:59.275 --rc genhtml_function_coverage=1 00:06:59.275 --rc genhtml_legend=1 00:06:59.275 --rc geninfo_all_blocks=1 00:06:59.275 --rc geninfo_unexecuted_blocks=1 00:06:59.275 00:06:59.275 ' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.275 --rc genhtml_branch_coverage=1 00:06:59.275 --rc genhtml_function_coverage=1 00:06:59.275 --rc genhtml_legend=1 00:06:59.275 --rc geninfo_all_blocks=1 00:06:59.275 --rc geninfo_unexecuted_blocks=1 00:06:59.275 00:06:59.275 ' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.275 --rc genhtml_branch_coverage=1 00:06:59.275 --rc genhtml_function_coverage=1 00:06:59.275 --rc genhtml_legend=1 00:06:59.275 --rc geninfo_all_blocks=1 00:06:59.275 --rc geninfo_unexecuted_blocks=1 00:06:59.275 00:06:59.275 ' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.275 --rc genhtml_branch_coverage=1 00:06:59.275 --rc genhtml_function_coverage=1 00:06:59.275 --rc genhtml_legend=1 00:06:59.275 --rc geninfo_all_blocks=1 00:06:59.275 --rc geninfo_unexecuted_blocks=1 00:06:59.275 00:06:59.275 ' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.275 14:12:23 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:59.275 ************************************ 00:06:59.275 START TEST dd_malloc_copy 00:06:59.275 ************************************ 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:59.275 14:12:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.275 [2024-12-10 14:12:24.035136] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:06:59.275 [2024-12-10 14:12:24.035242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:06:59.275 { 00:06:59.275 "subsystems": [ 00:06:59.275 { 00:06:59.275 "subsystem": "bdev", 00:06:59.275 "config": [ 00:06:59.275 { 00:06:59.275 "params": { 00:06:59.275 "block_size": 512, 00:06:59.275 "num_blocks": 1048576, 00:06:59.275 "name": "malloc0" 00:06:59.275 }, 00:06:59.275 "method": "bdev_malloc_create" 00:06:59.275 }, 00:06:59.275 { 00:06:59.275 "params": { 00:06:59.275 "block_size": 512, 00:06:59.275 "num_blocks": 1048576, 00:06:59.275 "name": "malloc1" 00:06:59.275 }, 00:06:59.275 "method": "bdev_malloc_create" 00:06:59.275 }, 00:06:59.275 { 00:06:59.275 "method": "bdev_wait_for_examine" 00:06:59.275 } 00:06:59.275 ] 00:06:59.275 } 00:06:59.275 ] 00:06:59.275 } 00:06:59.535 [2024-12-10 14:12:24.175834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.535 [2024-12-10 14:12:24.205067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.535 [2024-12-10 14:12:24.235730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.916  [2024-12-10T14:12:26.691Z] Copying: 217/512 [MB] (217 MBps) [2024-12-10T14:12:26.951Z] Copying: 427/512 [MB] (209 MBps) [2024-12-10T14:12:27.210Z] Copying: 512/512 [MB] (average 215 MBps) 00:07:02.373 00:07:02.373 14:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:02.373 14:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:02.373 14:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:02.373 14:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.373 [2024-12-10 14:12:27.178079] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:02.373 [2024-12-10 14:12:27.178181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:07:02.373 { 00:07:02.373 "subsystems": [ 00:07:02.373 { 00:07:02.373 "subsystem": "bdev", 00:07:02.373 "config": [ 00:07:02.373 { 00:07:02.373 "params": { 00:07:02.373 "block_size": 512, 00:07:02.373 "num_blocks": 1048576, 00:07:02.373 "name": "malloc0" 00:07:02.373 }, 00:07:02.373 "method": "bdev_malloc_create" 00:07:02.373 }, 00:07:02.373 { 00:07:02.373 "params": { 00:07:02.373 "block_size": 512, 00:07:02.373 "num_blocks": 1048576, 00:07:02.373 "name": "malloc1" 00:07:02.373 }, 00:07:02.373 "method": "bdev_malloc_create" 00:07:02.373 }, 00:07:02.373 { 00:07:02.373 "method": "bdev_wait_for_examine" 00:07:02.373 } 00:07:02.373 ] 00:07:02.373 } 00:07:02.373 ] 00:07:02.373 } 00:07:02.632 [2024-12-10 14:12:27.322211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.632 [2024-12-10 14:12:27.349113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.632 [2024-12-10 14:12:27.376641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.011  [2024-12-10T14:12:29.794Z] Copying: 199/512 [MB] (199 MBps) [2024-12-10T14:12:30.053Z] Copying: 419/512 [MB] (220 MBps) [2024-12-10T14:12:30.313Z] Copying: 512/512 [MB] (average 212 MBps) 00:07:05.476 00:07:05.476 00:07:05.476 real 0m6.286s 00:07:05.476 user 0m5.678s 00:07:05.476 sys 0m0.458s 00:07:05.476 14:12:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.476 14:12:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.476 ************************************ 00:07:05.476 END TEST dd_malloc_copy 00:07:05.476 ************************************ 00:07:05.736 00:07:05.736 real 0m6.527s 00:07:05.736 user 0m5.801s 00:07:05.736 sys 0m0.580s 00:07:05.736 14:12:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.736 14:12:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:05.736 ************************************ 00:07:05.736 END TEST spdk_dd_malloc 00:07:05.736 ************************************ 00:07:05.736 14:12:30 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:05.736 14:12:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:05.736 14:12:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.736 14:12:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.736 ************************************ 00:07:05.736 START TEST spdk_dd_bdev_to_bdev 00:07:05.736 ************************************ 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:05.736 * Looking for test storage... 
00:07:05.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.736 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.737 --rc genhtml_branch_coverage=1 00:07:05.737 --rc genhtml_function_coverage=1 00:07:05.737 --rc genhtml_legend=1 00:07:05.737 --rc geninfo_all_blocks=1 00:07:05.737 --rc geninfo_unexecuted_blocks=1 00:07:05.737 00:07:05.737 ' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.737 14:12:30 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:05.737 ************************************ 00:07:05.737 START TEST dd_inflate_file 00:07:05.737 ************************************ 00:07:05.737 14:12:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:05.996 [2024-12-10 14:12:30.609545] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:05.996 [2024-12-10 14:12:30.609659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61792 ] 00:07:05.996 [2024-12-10 14:12:30.756335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.996 [2024-12-10 14:12:30.793091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.996 [2024-12-10 14:12:30.825822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.256  [2024-12-10T14:12:31.093Z] Copying: 64/64 [MB] (average 1488 MBps) 00:07:06.256 00:07:06.256 00:07:06.256 real 0m0.487s 00:07:06.256 user 0m0.279s 00:07:06.256 sys 0m0.242s 00:07:06.256 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.256 ************************************ 00:07:06.256 END TEST dd_inflate_file 00:07:06.256 ************************************ 00:07:06.256 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:06.256 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.516 ************************************ 00:07:06.516 START TEST dd_copy_to_out_bdev 00:07:06.516 ************************************ 00:07:06.516 14:12:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:06.516 { 00:07:06.516 "subsystems": [ 00:07:06.516 { 00:07:06.516 "subsystem": "bdev", 00:07:06.516 "config": [ 00:07:06.516 { 00:07:06.516 "params": { 00:07:06.516 "trtype": "pcie", 00:07:06.516 "traddr": "0000:00:10.0", 00:07:06.516 "name": "Nvme0" 00:07:06.516 }, 00:07:06.516 "method": "bdev_nvme_attach_controller" 00:07:06.516 }, 00:07:06.516 { 00:07:06.516 "params": { 00:07:06.516 "trtype": "pcie", 00:07:06.516 "traddr": "0000:00:11.0", 00:07:06.516 "name": "Nvme1" 00:07:06.516 }, 00:07:06.516 "method": "bdev_nvme_attach_controller" 00:07:06.516 }, 00:07:06.516 { 00:07:06.516 "method": "bdev_wait_for_examine" 00:07:06.516 } 00:07:06.516 ] 00:07:06.516 } 00:07:06.516 ] 00:07:06.516 } 00:07:06.516 [2024-12-10 14:12:31.156584] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:06.517 [2024-12-10 14:12:31.156701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61829 ] 00:07:06.517 [2024-12-10 14:12:31.301798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.517 [2024-12-10 14:12:31.332180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.776 [2024-12-10 14:12:31.361022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.712  [2024-12-10T14:12:32.868Z] Copying: 52/64 [MB] (52 MBps) [2024-12-10T14:12:33.142Z] Copying: 64/64 [MB] (average 53 MBps) 00:07:08.305 00:07:08.305 00:07:08.305 real 0m1.787s 00:07:08.305 user 0m1.603s 00:07:08.305 sys 0m1.456s 00:07:08.305 ************************************ 00:07:08.305 END TEST dd_copy_to_out_bdev 00:07:08.305 ************************************ 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 ************************************ 00:07:08.305 START TEST dd_offset_magic 00:07:08.305 ************************************ 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:08.305 14:12:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 [2024-12-10 14:12:32.991917] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:08.305 [2024-12-10 14:12:32.992738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61873 ] 00:07:08.305 { 00:07:08.305 "subsystems": [ 00:07:08.305 { 00:07:08.305 "subsystem": "bdev", 00:07:08.305 "config": [ 00:07:08.305 { 00:07:08.305 "params": { 00:07:08.305 "trtype": "pcie", 00:07:08.305 "traddr": "0000:00:10.0", 00:07:08.305 "name": "Nvme0" 00:07:08.305 }, 00:07:08.306 "method": "bdev_nvme_attach_controller" 00:07:08.306 }, 00:07:08.306 { 00:07:08.306 "params": { 00:07:08.306 "trtype": "pcie", 00:07:08.306 "traddr": "0000:00:11.0", 00:07:08.306 "name": "Nvme1" 00:07:08.306 }, 00:07:08.306 "method": "bdev_nvme_attach_controller" 00:07:08.306 }, 00:07:08.306 { 00:07:08.306 "method": "bdev_wait_for_examine" 00:07:08.306 } 00:07:08.306 ] 00:07:08.306 } 00:07:08.306 ] 00:07:08.306 } 00:07:08.564 [2024-12-10 14:12:33.141162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.564 [2024-12-10 14:12:33.176632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.564 [2024-12-10 14:12:33.209173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.822  [2024-12-10T14:12:33.660Z] Copying: 65/65 [MB] (average 970 MBps) 00:07:08.823 00:07:08.823 14:12:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:08.823 14:12:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:08.823 14:12:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:08.823 14:12:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:09.081 [2024-12-10 14:12:33.689114] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:09.081 [2024-12-10 14:12:33.689220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61888 ] 00:07:09.081 { 00:07:09.081 "subsystems": [ 00:07:09.081 { 00:07:09.081 "subsystem": "bdev", 00:07:09.081 "config": [ 00:07:09.081 { 00:07:09.081 "params": { 00:07:09.081 "trtype": "pcie", 00:07:09.081 "traddr": "0000:00:10.0", 00:07:09.081 "name": "Nvme0" 00:07:09.081 }, 00:07:09.081 "method": "bdev_nvme_attach_controller" 00:07:09.081 }, 00:07:09.081 { 00:07:09.081 "params": { 00:07:09.081 "trtype": "pcie", 00:07:09.081 "traddr": "0000:00:11.0", 00:07:09.081 "name": "Nvme1" 00:07:09.081 }, 00:07:09.081 "method": "bdev_nvme_attach_controller" 00:07:09.081 }, 00:07:09.081 { 00:07:09.081 "method": "bdev_wait_for_examine" 00:07:09.081 } 00:07:09.081 ] 00:07:09.081 } 00:07:09.081 ] 00:07:09.081 } 00:07:09.081 [2024-12-10 14:12:33.835984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.081 [2024-12-10 14:12:33.874553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.081 [2024-12-10 14:12:33.910051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.340  [2024-12-10T14:12:34.436Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:09.599 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:09.599 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:09.599 [2024-12-10 14:12:34.285144] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:09.599 [2024-12-10 14:12:34.285388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:07:09.599 { 00:07:09.599 "subsystems": [ 00:07:09.599 { 00:07:09.599 "subsystem": "bdev", 00:07:09.599 "config": [ 00:07:09.599 { 00:07:09.599 "params": { 00:07:09.599 "trtype": "pcie", 00:07:09.599 "traddr": "0000:00:10.0", 00:07:09.599 "name": "Nvme0" 00:07:09.599 }, 00:07:09.599 "method": "bdev_nvme_attach_controller" 00:07:09.599 }, 00:07:09.599 { 00:07:09.599 "params": { 00:07:09.599 "trtype": "pcie", 00:07:09.599 "traddr": "0000:00:11.0", 00:07:09.599 "name": "Nvme1" 00:07:09.599 }, 00:07:09.599 "method": "bdev_nvme_attach_controller" 00:07:09.599 }, 00:07:09.599 { 00:07:09.599 "method": "bdev_wait_for_examine" 00:07:09.599 } 00:07:09.599 ] 00:07:09.599 } 00:07:09.599 ] 00:07:09.599 } 00:07:09.599 [2024-12-10 14:12:34.429712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.858 [2024-12-10 14:12:34.467151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.858 [2024-12-10 14:12:34.501249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.116  [2024-12-10T14:12:34.953Z] Copying: 65/65 [MB] (average 1140 MBps) 00:07:10.116 00:07:10.116 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:10.116 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:10.116 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:10.116 14:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:10.375 [2024-12-10 14:12:34.979621] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
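The runs above make up the dd_offset_magic loop: spdk_dd copies a 65 MiB region from Nvme0n1 into Nvme1n1 at a 1 MiB block size and a given --seek offset, then reads one block back from Nvme1n1 at the matching --skip offset into dd.dump1 and checks that it still begins with the 26-byte magic string "This Is Our Magic, find it" (the read -rn26 magic_check / [[ ... ]] pair). A rough standalone sketch of one such round trip, assuming the two PCIe controllers shown in the JSON dumps are attached via a conf.json with the same bdev entries; conf.json and the dump path here are illustrative, not the test's exact files:

# hedged sketch of one seek/skip iteration, not the test script itself
spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json conf.json
spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json conf.json
read -rn26 magic_check < dd.dump1
[[ $magic_check == "This Is Our Magic, find it" ]]   # the magic was planted at this offset earlier in the test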
00:07:10.375 [2024-12-10 14:12:34.979716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61920 ] 00:07:10.375 { 00:07:10.375 "subsystems": [ 00:07:10.375 { 00:07:10.375 "subsystem": "bdev", 00:07:10.375 "config": [ 00:07:10.375 { 00:07:10.375 "params": { 00:07:10.375 "trtype": "pcie", 00:07:10.375 "traddr": "0000:00:10.0", 00:07:10.375 "name": "Nvme0" 00:07:10.375 }, 00:07:10.375 "method": "bdev_nvme_attach_controller" 00:07:10.375 }, 00:07:10.376 { 00:07:10.376 "params": { 00:07:10.376 "trtype": "pcie", 00:07:10.376 "traddr": "0000:00:11.0", 00:07:10.376 "name": "Nvme1" 00:07:10.376 }, 00:07:10.376 "method": "bdev_nvme_attach_controller" 00:07:10.376 }, 00:07:10.376 { 00:07:10.376 "method": "bdev_wait_for_examine" 00:07:10.376 } 00:07:10.376 ] 00:07:10.376 } 00:07:10.376 ] 00:07:10.376 } 00:07:10.376 [2024-12-10 14:12:35.124426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.376 [2024-12-10 14:12:35.161632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.376 [2024-12-10 14:12:35.197633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.634  [2024-12-10T14:12:35.730Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:10.893 00:07:10.893 ************************************ 00:07:10.893 END TEST dd_offset_magic 00:07:10.893 ************************************ 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:10.893 00:07:10.893 real 0m2.581s 00:07:10.893 user 0m1.907s 00:07:10.893 sys 0m0.676s 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:10.893 14:12:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:10.893 [2024-12-10 14:12:35.625382] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
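After END TEST dd_offset_magic, the cleanup helper clear_nvme zeroes the regions the test wrote: the 4194330-byte size (4 MiB plus 26 bytes) is covered by count=5 blocks of bs=1048576 read from /dev/zero, first for Nvme0n1 in the run whose parameters follow and then for Nvme1n1. A minimal sketch of the equivalent call, assuming the same attach-controller JSON is available as conf.json (path illustrative):

# hedged sketch of the clear_nvme zero-fill
bs=1048576
count=$(( (4194330 + bs - 1) / bs ))   # = 5, matching the local count=5 above
spdk_dd --if=/dev/zero --bs=$bs --ob=Nvme0n1 --count=$count --json conf.json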
00:07:10.893 [2024-12-10 14:12:35.625630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61951 ] 00:07:10.893 { 00:07:10.893 "subsystems": [ 00:07:10.893 { 00:07:10.893 "subsystem": "bdev", 00:07:10.893 "config": [ 00:07:10.893 { 00:07:10.893 "params": { 00:07:10.893 "trtype": "pcie", 00:07:10.893 "traddr": "0000:00:10.0", 00:07:10.893 "name": "Nvme0" 00:07:10.893 }, 00:07:10.893 "method": "bdev_nvme_attach_controller" 00:07:10.893 }, 00:07:10.893 { 00:07:10.893 "params": { 00:07:10.893 "trtype": "pcie", 00:07:10.893 "traddr": "0000:00:11.0", 00:07:10.893 "name": "Nvme1" 00:07:10.893 }, 00:07:10.893 "method": "bdev_nvme_attach_controller" 00:07:10.893 }, 00:07:10.893 { 00:07:10.893 "method": "bdev_wait_for_examine" 00:07:10.893 } 00:07:10.893 ] 00:07:10.893 } 00:07:10.893 ] 00:07:10.893 } 00:07:11.151 [2024-12-10 14:12:35.774931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.151 [2024-12-10 14:12:35.812266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.151 [2024-12-10 14:12:35.846345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.409  [2024-12-10T14:12:36.246Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:11.409 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:11.409 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:11.409 [2024-12-10 14:12:36.218594] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:11.409 [2024-12-10 14:12:36.218832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61972 ] 00:07:11.409 { 00:07:11.409 "subsystems": [ 00:07:11.409 { 00:07:11.409 "subsystem": "bdev", 00:07:11.409 "config": [ 00:07:11.409 { 00:07:11.409 "params": { 00:07:11.409 "trtype": "pcie", 00:07:11.409 "traddr": "0000:00:10.0", 00:07:11.409 "name": "Nvme0" 00:07:11.409 }, 00:07:11.409 "method": "bdev_nvme_attach_controller" 00:07:11.409 }, 00:07:11.409 { 00:07:11.409 "params": { 00:07:11.409 "trtype": "pcie", 00:07:11.409 "traddr": "0000:00:11.0", 00:07:11.409 "name": "Nvme1" 00:07:11.409 }, 00:07:11.409 "method": "bdev_nvme_attach_controller" 00:07:11.409 }, 00:07:11.409 { 00:07:11.409 "method": "bdev_wait_for_examine" 00:07:11.409 } 00:07:11.409 ] 00:07:11.409 } 00:07:11.409 ] 00:07:11.409 } 00:07:11.667 [2024-12-10 14:12:36.364897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.667 [2024-12-10 14:12:36.401558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.667 [2024-12-10 14:12:36.435214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.926  [2024-12-10T14:12:36.763Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:11.926 00:07:11.926 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:12.186 ************************************ 00:07:12.186 END TEST spdk_dd_bdev_to_bdev 00:07:12.186 ************************************ 00:07:12.186 00:07:12.186 real 0m6.414s 00:07:12.186 user 0m4.839s 00:07:12.186 sys 0m2.939s 00:07:12.186 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.186 14:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:12.186 14:12:36 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:12.186 14:12:36 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:12.186 14:12:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.186 14:12:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.186 14:12:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:12.186 ************************************ 00:07:12.186 START TEST spdk_dd_uring 00:07:12.186 ************************************ 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:12.186 * Looking for test storage... 
00:07:12.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.186 14:12:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.186 --rc genhtml_branch_coverage=1 00:07:12.186 --rc genhtml_function_coverage=1 00:07:12.186 --rc genhtml_legend=1 00:07:12.186 --rc geninfo_all_blocks=1 00:07:12.186 --rc geninfo_unexecuted_blocks=1 00:07:12.186 00:07:12.186 ' 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.186 --rc genhtml_branch_coverage=1 00:07:12.186 --rc genhtml_function_coverage=1 00:07:12.186 --rc genhtml_legend=1 00:07:12.186 --rc geninfo_all_blocks=1 00:07:12.186 --rc geninfo_unexecuted_blocks=1 00:07:12.186 00:07:12.186 ' 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.186 --rc genhtml_branch_coverage=1 00:07:12.186 --rc genhtml_function_coverage=1 00:07:12.186 --rc genhtml_legend=1 00:07:12.186 --rc geninfo_all_blocks=1 00:07:12.186 --rc geninfo_unexecuted_blocks=1 00:07:12.186 00:07:12.186 ' 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.186 --rc genhtml_branch_coverage=1 00:07:12.186 --rc genhtml_function_coverage=1 00:07:12.186 --rc genhtml_legend=1 00:07:12.186 --rc geninfo_all_blocks=1 00:07:12.186 --rc geninfo_unexecuted_blocks=1 00:07:12.186 00:07:12.186 ' 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.186 14:12:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:12.186 ************************************ 00:07:12.186 START TEST dd_uring_copy 00:07:12.186 ************************************ 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:12.446 
14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=ekhu43kuhpy6e4epylg6om6mvs327az73f0jyx6zrolnojc7twwtysz6egatf3rzqcfh7oq0c5y9ju9nj8uv36v5tni4h67i91uqxozdhipy1xooj5cy2qxhyh1oufou4pmyysihmfxrk8rxvmb8zlh1fjl9efbz9cx0exlbrj1eow2ef342mkzp5h1bwixtrhn0d4hgrc16uuwx4oic4jx48d6vh0nbze06e6xvrthjiidzrs8thm4qjy28mue06k88kwrsbpamfx83k5qwyihc1cw9n18jg5v7etvbn0nz8bvx5e5axq5jco5s91rm9bgd006iop0ifivt4mysdjgp31fago3ztuibdrwoq2gd2bznfxlq95zl5j5aak4j6f8h4hfxmn0csw86lana2uno5lq775i8dzas1thlaqsnm179ah73do1tlubm9s9j82emb5vtf0ty27zpp5fmdyuo1m5gpk7fgt6rha2ds7kftsbkg9wjhe3edqna8gogj7y4tahf748bhq4snzn5qwwat4hrfit0xbz0octfkq40tyac8zlzu8esqj34mtxzldxudeyrvaunppbsonp1079qe0146r39k3il9gmxng4wj3qxiflbk9h4vuy0b1ypam9eso0dpz1v66nhlnsfpefztbu9au3sxp2emk4te0t5oa2g4f2c7nq6yjod8tk6m4fn5pxxym4j1kf3ieerd6sbv0c7hj1zhevcsvldo076lmdyh4rqgts3dl9l9yc90x67hr2p1o8tr9ks1es24j0kff495xaa8tmva3ni3pdspth3veglcno7hlngny3gxtl9g87t4n0hnafr11tnsmk566c3elm5x728gs15z99mjrh101pa9r92x42bf65xvdnjp6ywj92lwf0dujj5ufa986xya7j5alx688tb95dpf1dsluxoj8y59auh63an86f1jdu9wrlydi52udn6i4tmzfc5l2x85j0pue8d2vdic1z3y2dbandgvrrq1ms5 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
ekhu43kuhpy6e4epylg6om6mvs327az73f0jyx6zrolnojc7twwtysz6egatf3rzqcfh7oq0c5y9ju9nj8uv36v5tni4h67i91uqxozdhipy1xooj5cy2qxhyh1oufou4pmyysihmfxrk8rxvmb8zlh1fjl9efbz9cx0exlbrj1eow2ef342mkzp5h1bwixtrhn0d4hgrc16uuwx4oic4jx48d6vh0nbze06e6xvrthjiidzrs8thm4qjy28mue06k88kwrsbpamfx83k5qwyihc1cw9n18jg5v7etvbn0nz8bvx5e5axq5jco5s91rm9bgd006iop0ifivt4mysdjgp31fago3ztuibdrwoq2gd2bznfxlq95zl5j5aak4j6f8h4hfxmn0csw86lana2uno5lq775i8dzas1thlaqsnm179ah73do1tlubm9s9j82emb5vtf0ty27zpp5fmdyuo1m5gpk7fgt6rha2ds7kftsbkg9wjhe3edqna8gogj7y4tahf748bhq4snzn5qwwat4hrfit0xbz0octfkq40tyac8zlzu8esqj34mtxzldxudeyrvaunppbsonp1079qe0146r39k3il9gmxng4wj3qxiflbk9h4vuy0b1ypam9eso0dpz1v66nhlnsfpefztbu9au3sxp2emk4te0t5oa2g4f2c7nq6yjod8tk6m4fn5pxxym4j1kf3ieerd6sbv0c7hj1zhevcsvldo076lmdyh4rqgts3dl9l9yc90x67hr2p1o8tr9ks1es24j0kff495xaa8tmva3ni3pdspth3veglcno7hlngny3gxtl9g87t4n0hnafr11tnsmk566c3elm5x728gs15z99mjrh101pa9r92x42bf65xvdnjp6ywj92lwf0dujj5ufa986xya7j5alx688tb95dpf1dsluxoj8y59auh63an86f1jdu9wrlydi52udn6i4tmzfc5l2x85j0pue8d2vdic1z3y2dbandgvrrq1ms5 00:07:12.446 14:12:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:12.446 [2024-12-10 14:12:37.102204] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:07:12.446 [2024-12-10 14:12:37.102305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62048 ] 00:07:12.446 [2024-12-10 14:12:37.248340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.706 [2024-12-10 14:12:37.287017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.706 [2024-12-10 14:12:37.320068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.274  [2024-12-10T14:12:38.111Z] Copying: 511/511 [MB] (average 1312 MBps) 00:07:13.274 00:07:13.534 14:12:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:13.534 14:12:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:13.534 14:12:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.534 14:12:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.534 [2024-12-10 14:12:38.167705] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
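The dd_uring_copy steps above stage the data for uring0, a uring bdev backed by a 512 MiB zram device: /sys/class/zram-control/hot_add handed out device id 1, the device was sized to 512M, the 1024-byte magic string was written to magic.dump0 (the echo at dd/uring.sh@42, presumably redirected there) and padded with a single appended 536869887-byte write from /dev/zero, and the run whose parameters follow copies that file into uring0 through the malloc0/uring0 config. A rough sketch of the device-side setup, assuming the standard zram sysfs interface; the log only shows "echo 512M" after the /sys/block/zram1 check, so the disksize path is an assumption:

# hedged sketch of the zram-backed uring0 setup
id=$(cat /sys/class/zram-control/hot_add)   # returned 1 in this run, i.e. /dev/zram1
echo 512M > /sys/block/zram$id/disksize     # assumed destination of the echo 512M step
# uring0 is then declared in the spdk_dd --json config, as in the JSON dumps printed below:
#   { "params": { "filename": "/dev/zram1", "name": "uring0" }, "method": "bdev_uring_create" }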
00:07:13.534 [2024-12-10 14:12:38.167777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62065 ] 00:07:13.534 { 00:07:13.534 "subsystems": [ 00:07:13.534 { 00:07:13.534 "subsystem": "bdev", 00:07:13.534 "config": [ 00:07:13.534 { 00:07:13.534 "params": { 00:07:13.534 "block_size": 512, 00:07:13.534 "num_blocks": 1048576, 00:07:13.534 "name": "malloc0" 00:07:13.534 }, 00:07:13.534 "method": "bdev_malloc_create" 00:07:13.534 }, 00:07:13.534 { 00:07:13.534 "params": { 00:07:13.534 "filename": "/dev/zram1", 00:07:13.534 "name": "uring0" 00:07:13.534 }, 00:07:13.534 "method": "bdev_uring_create" 00:07:13.534 }, 00:07:13.534 { 00:07:13.534 "method": "bdev_wait_for_examine" 00:07:13.534 } 00:07:13.534 ] 00:07:13.534 } 00:07:13.534 ] 00:07:13.534 } 00:07:13.534 [2024-12-10 14:12:38.315005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.534 [2024-12-10 14:12:38.350843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.793 [2024-12-10 14:12:38.384545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.731  [2024-12-10T14:12:40.946Z] Copying: 191/512 [MB] (191 MBps) [2024-12-10T14:12:41.205Z] Copying: 387/512 [MB] (195 MBps) [2024-12-10T14:12:41.464Z] Copying: 512/512 [MB] (average 193 MBps) 00:07:16.627 00:07:16.627 14:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:16.627 14:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:16.627 14:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:16.627 14:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.886 [2024-12-10 14:12:41.463202] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:16.886 [2024-12-10 14:12:41.463321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62115 ] 00:07:16.886 { 00:07:16.886 "subsystems": [ 00:07:16.886 { 00:07:16.886 "subsystem": "bdev", 00:07:16.886 "config": [ 00:07:16.886 { 00:07:16.886 "params": { 00:07:16.886 "block_size": 512, 00:07:16.886 "num_blocks": 1048576, 00:07:16.886 "name": "malloc0" 00:07:16.886 }, 00:07:16.886 "method": "bdev_malloc_create" 00:07:16.886 }, 00:07:16.886 { 00:07:16.886 "params": { 00:07:16.886 "filename": "/dev/zram1", 00:07:16.886 "name": "uring0" 00:07:16.886 }, 00:07:16.886 "method": "bdev_uring_create" 00:07:16.886 }, 00:07:16.886 { 00:07:16.886 "method": "bdev_wait_for_examine" 00:07:16.886 } 00:07:16.886 ] 00:07:16.886 } 00:07:16.886 ] 00:07:16.886 } 00:07:16.886 [2024-12-10 14:12:41.610754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.886 [2024-12-10 14:12:41.645882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.887 [2024-12-10 14:12:41.678699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.267  [2024-12-10T14:12:44.042Z] Copying: 162/512 [MB] (162 MBps) [2024-12-10T14:12:44.979Z] Copying: 336/512 [MB] (173 MBps) [2024-12-10T14:12:44.979Z] Copying: 512/512 [MB] (average 174 MBps) 00:07:20.142 00:07:20.142 14:12:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:20.142 14:12:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ ekhu43kuhpy6e4epylg6om6mvs327az73f0jyx6zrolnojc7twwtysz6egatf3rzqcfh7oq0c5y9ju9nj8uv36v5tni4h67i91uqxozdhipy1xooj5cy2qxhyh1oufou4pmyysihmfxrk8rxvmb8zlh1fjl9efbz9cx0exlbrj1eow2ef342mkzp5h1bwixtrhn0d4hgrc16uuwx4oic4jx48d6vh0nbze06e6xvrthjiidzrs8thm4qjy28mue06k88kwrsbpamfx83k5qwyihc1cw9n18jg5v7etvbn0nz8bvx5e5axq5jco5s91rm9bgd006iop0ifivt4mysdjgp31fago3ztuibdrwoq2gd2bznfxlq95zl5j5aak4j6f8h4hfxmn0csw86lana2uno5lq775i8dzas1thlaqsnm179ah73do1tlubm9s9j82emb5vtf0ty27zpp5fmdyuo1m5gpk7fgt6rha2ds7kftsbkg9wjhe3edqna8gogj7y4tahf748bhq4snzn5qwwat4hrfit0xbz0octfkq40tyac8zlzu8esqj34mtxzldxudeyrvaunppbsonp1079qe0146r39k3il9gmxng4wj3qxiflbk9h4vuy0b1ypam9eso0dpz1v66nhlnsfpefztbu9au3sxp2emk4te0t5oa2g4f2c7nq6yjod8tk6m4fn5pxxym4j1kf3ieerd6sbv0c7hj1zhevcsvldo076lmdyh4rqgts3dl9l9yc90x67hr2p1o8tr9ks1es24j0kff495xaa8tmva3ni3pdspth3veglcno7hlngny3gxtl9g87t4n0hnafr11tnsmk566c3elm5x728gs15z99mjrh101pa9r92x42bf65xvdnjp6ywj92lwf0dujj5ufa986xya7j5alx688tb95dpf1dsluxoj8y59auh63an86f1jdu9wrlydi52udn6i4tmzfc5l2x85j0pue8d2vdic1z3y2dbandgvrrq1ms5 == 
\e\k\h\u\4\3\k\u\h\p\y\6\e\4\e\p\y\l\g\6\o\m\6\m\v\s\3\2\7\a\z\7\3\f\0\j\y\x\6\z\r\o\l\n\o\j\c\7\t\w\w\t\y\s\z\6\e\g\a\t\f\3\r\z\q\c\f\h\7\o\q\0\c\5\y\9\j\u\9\n\j\8\u\v\3\6\v\5\t\n\i\4\h\6\7\i\9\1\u\q\x\o\z\d\h\i\p\y\1\x\o\o\j\5\c\y\2\q\x\h\y\h\1\o\u\f\o\u\4\p\m\y\y\s\i\h\m\f\x\r\k\8\r\x\v\m\b\8\z\l\h\1\f\j\l\9\e\f\b\z\9\c\x\0\e\x\l\b\r\j\1\e\o\w\2\e\f\3\4\2\m\k\z\p\5\h\1\b\w\i\x\t\r\h\n\0\d\4\h\g\r\c\1\6\u\u\w\x\4\o\i\c\4\j\x\4\8\d\6\v\h\0\n\b\z\e\0\6\e\6\x\v\r\t\h\j\i\i\d\z\r\s\8\t\h\m\4\q\j\y\2\8\m\u\e\0\6\k\8\8\k\w\r\s\b\p\a\m\f\x\8\3\k\5\q\w\y\i\h\c\1\c\w\9\n\1\8\j\g\5\v\7\e\t\v\b\n\0\n\z\8\b\v\x\5\e\5\a\x\q\5\j\c\o\5\s\9\1\r\m\9\b\g\d\0\0\6\i\o\p\0\i\f\i\v\t\4\m\y\s\d\j\g\p\3\1\f\a\g\o\3\z\t\u\i\b\d\r\w\o\q\2\g\d\2\b\z\n\f\x\l\q\9\5\z\l\5\j\5\a\a\k\4\j\6\f\8\h\4\h\f\x\m\n\0\c\s\w\8\6\l\a\n\a\2\u\n\o\5\l\q\7\7\5\i\8\d\z\a\s\1\t\h\l\a\q\s\n\m\1\7\9\a\h\7\3\d\o\1\t\l\u\b\m\9\s\9\j\8\2\e\m\b\5\v\t\f\0\t\y\2\7\z\p\p\5\f\m\d\y\u\o\1\m\5\g\p\k\7\f\g\t\6\r\h\a\2\d\s\7\k\f\t\s\b\k\g\9\w\j\h\e\3\e\d\q\n\a\8\g\o\g\j\7\y\4\t\a\h\f\7\4\8\b\h\q\4\s\n\z\n\5\q\w\w\a\t\4\h\r\f\i\t\0\x\b\z\0\o\c\t\f\k\q\4\0\t\y\a\c\8\z\l\z\u\8\e\s\q\j\3\4\m\t\x\z\l\d\x\u\d\e\y\r\v\a\u\n\p\p\b\s\o\n\p\1\0\7\9\q\e\0\1\4\6\r\3\9\k\3\i\l\9\g\m\x\n\g\4\w\j\3\q\x\i\f\l\b\k\9\h\4\v\u\y\0\b\1\y\p\a\m\9\e\s\o\0\d\p\z\1\v\6\6\n\h\l\n\s\f\p\e\f\z\t\b\u\9\a\u\3\s\x\p\2\e\m\k\4\t\e\0\t\5\o\a\2\g\4\f\2\c\7\n\q\6\y\j\o\d\8\t\k\6\m\4\f\n\5\p\x\x\y\m\4\j\1\k\f\3\i\e\e\r\d\6\s\b\v\0\c\7\h\j\1\z\h\e\v\c\s\v\l\d\o\0\7\6\l\m\d\y\h\4\r\q\g\t\s\3\d\l\9\l\9\y\c\9\0\x\6\7\h\r\2\p\1\o\8\t\r\9\k\s\1\e\s\2\4\j\0\k\f\f\4\9\5\x\a\a\8\t\m\v\a\3\n\i\3\p\d\s\p\t\h\3\v\e\g\l\c\n\o\7\h\l\n\g\n\y\3\g\x\t\l\9\g\8\7\t\4\n\0\h\n\a\f\r\1\1\t\n\s\m\k\5\6\6\c\3\e\l\m\5\x\7\2\8\g\s\1\5\z\9\9\m\j\r\h\1\0\1\p\a\9\r\9\2\x\4\2\b\f\6\5\x\v\d\n\j\p\6\y\w\j\9\2\l\w\f\0\d\u\j\j\5\u\f\a\9\8\6\x\y\a\7\j\5\a\l\x\6\8\8\t\b\9\5\d\p\f\1\d\s\l\u\x\o\j\8\y\5\9\a\u\h\6\3\a\n\8\6\f\1\j\d\u\9\w\r\l\y\d\i\5\2\u\d\n\6\i\4\t\m\z\f\c\5\l\2\x\8\5\j\0\p\u\e\8\d\2\v\d\i\c\1\z\3\y\2\d\b\a\n\d\g\v\r\r\q\1\m\s\5 ]] 00:07:20.142 14:12:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:20.142 14:12:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ ekhu43kuhpy6e4epylg6om6mvs327az73f0jyx6zrolnojc7twwtysz6egatf3rzqcfh7oq0c5y9ju9nj8uv36v5tni4h67i91uqxozdhipy1xooj5cy2qxhyh1oufou4pmyysihmfxrk8rxvmb8zlh1fjl9efbz9cx0exlbrj1eow2ef342mkzp5h1bwixtrhn0d4hgrc16uuwx4oic4jx48d6vh0nbze06e6xvrthjiidzrs8thm4qjy28mue06k88kwrsbpamfx83k5qwyihc1cw9n18jg5v7etvbn0nz8bvx5e5axq5jco5s91rm9bgd006iop0ifivt4mysdjgp31fago3ztuibdrwoq2gd2bznfxlq95zl5j5aak4j6f8h4hfxmn0csw86lana2uno5lq775i8dzas1thlaqsnm179ah73do1tlubm9s9j82emb5vtf0ty27zpp5fmdyuo1m5gpk7fgt6rha2ds7kftsbkg9wjhe3edqna8gogj7y4tahf748bhq4snzn5qwwat4hrfit0xbz0octfkq40tyac8zlzu8esqj34mtxzldxudeyrvaunppbsonp1079qe0146r39k3il9gmxng4wj3qxiflbk9h4vuy0b1ypam9eso0dpz1v66nhlnsfpefztbu9au3sxp2emk4te0t5oa2g4f2c7nq6yjod8tk6m4fn5pxxym4j1kf3ieerd6sbv0c7hj1zhevcsvldo076lmdyh4rqgts3dl9l9yc90x67hr2p1o8tr9ks1es24j0kff495xaa8tmva3ni3pdspth3veglcno7hlngny3gxtl9g87t4n0hnafr11tnsmk566c3elm5x728gs15z99mjrh101pa9r92x42bf65xvdnjp6ywj92lwf0dujj5ufa986xya7j5alx688tb95dpf1dsluxoj8y59auh63an86f1jdu9wrlydi52udn6i4tmzfc5l2x85j0pue8d2vdic1z3y2dbandgvrrq1ms5 == 
\e\k\h\u\4\3\k\u\h\p\y\6\e\4\e\p\y\l\g\6\o\m\6\m\v\s\3\2\7\a\z\7\3\f\0\j\y\x\6\z\r\o\l\n\o\j\c\7\t\w\w\t\y\s\z\6\e\g\a\t\f\3\r\z\q\c\f\h\7\o\q\0\c\5\y\9\j\u\9\n\j\8\u\v\3\6\v\5\t\n\i\4\h\6\7\i\9\1\u\q\x\o\z\d\h\i\p\y\1\x\o\o\j\5\c\y\2\q\x\h\y\h\1\o\u\f\o\u\4\p\m\y\y\s\i\h\m\f\x\r\k\8\r\x\v\m\b\8\z\l\h\1\f\j\l\9\e\f\b\z\9\c\x\0\e\x\l\b\r\j\1\e\o\w\2\e\f\3\4\2\m\k\z\p\5\h\1\b\w\i\x\t\r\h\n\0\d\4\h\g\r\c\1\6\u\u\w\x\4\o\i\c\4\j\x\4\8\d\6\v\h\0\n\b\z\e\0\6\e\6\x\v\r\t\h\j\i\i\d\z\r\s\8\t\h\m\4\q\j\y\2\8\m\u\e\0\6\k\8\8\k\w\r\s\b\p\a\m\f\x\8\3\k\5\q\w\y\i\h\c\1\c\w\9\n\1\8\j\g\5\v\7\e\t\v\b\n\0\n\z\8\b\v\x\5\e\5\a\x\q\5\j\c\o\5\s\9\1\r\m\9\b\g\d\0\0\6\i\o\p\0\i\f\i\v\t\4\m\y\s\d\j\g\p\3\1\f\a\g\o\3\z\t\u\i\b\d\r\w\o\q\2\g\d\2\b\z\n\f\x\l\q\9\5\z\l\5\j\5\a\a\k\4\j\6\f\8\h\4\h\f\x\m\n\0\c\s\w\8\6\l\a\n\a\2\u\n\o\5\l\q\7\7\5\i\8\d\z\a\s\1\t\h\l\a\q\s\n\m\1\7\9\a\h\7\3\d\o\1\t\l\u\b\m\9\s\9\j\8\2\e\m\b\5\v\t\f\0\t\y\2\7\z\p\p\5\f\m\d\y\u\o\1\m\5\g\p\k\7\f\g\t\6\r\h\a\2\d\s\7\k\f\t\s\b\k\g\9\w\j\h\e\3\e\d\q\n\a\8\g\o\g\j\7\y\4\t\a\h\f\7\4\8\b\h\q\4\s\n\z\n\5\q\w\w\a\t\4\h\r\f\i\t\0\x\b\z\0\o\c\t\f\k\q\4\0\t\y\a\c\8\z\l\z\u\8\e\s\q\j\3\4\m\t\x\z\l\d\x\u\d\e\y\r\v\a\u\n\p\p\b\s\o\n\p\1\0\7\9\q\e\0\1\4\6\r\3\9\k\3\i\l\9\g\m\x\n\g\4\w\j\3\q\x\i\f\l\b\k\9\h\4\v\u\y\0\b\1\y\p\a\m\9\e\s\o\0\d\p\z\1\v\6\6\n\h\l\n\s\f\p\e\f\z\t\b\u\9\a\u\3\s\x\p\2\e\m\k\4\t\e\0\t\5\o\a\2\g\4\f\2\c\7\n\q\6\y\j\o\d\8\t\k\6\m\4\f\n\5\p\x\x\y\m\4\j\1\k\f\3\i\e\e\r\d\6\s\b\v\0\c\7\h\j\1\z\h\e\v\c\s\v\l\d\o\0\7\6\l\m\d\y\h\4\r\q\g\t\s\3\d\l\9\l\9\y\c\9\0\x\6\7\h\r\2\p\1\o\8\t\r\9\k\s\1\e\s\2\4\j\0\k\f\f\4\9\5\x\a\a\8\t\m\v\a\3\n\i\3\p\d\s\p\t\h\3\v\e\g\l\c\n\o\7\h\l\n\g\n\y\3\g\x\t\l\9\g\8\7\t\4\n\0\h\n\a\f\r\1\1\t\n\s\m\k\5\6\6\c\3\e\l\m\5\x\7\2\8\g\s\1\5\z\9\9\m\j\r\h\1\0\1\p\a\9\r\9\2\x\4\2\b\f\6\5\x\v\d\n\j\p\6\y\w\j\9\2\l\w\f\0\d\u\j\j\5\u\f\a\9\8\6\x\y\a\7\j\5\a\l\x\6\8\8\t\b\9\5\d\p\f\1\d\s\l\u\x\o\j\8\y\5\9\a\u\h\6\3\a\n\8\6\f\1\j\d\u\9\w\r\l\y\d\i\5\2\u\d\n\6\i\4\t\m\z\f\c\5\l\2\x\8\5\j\0\p\u\e\8\d\2\v\d\i\c\1\z\3\y\2\d\b\a\n\d\g\v\r\r\q\1\m\s\5 ]] 00:07:20.143 14:12:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:20.710 14:12:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:20.710 14:12:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:20.710 14:12:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:20.710 14:12:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:20.710 [2024-12-10 14:12:45.319194] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
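The two escaped pattern matches above are the verification half of dd_uring_copy: the first 1024 bytes read back into verify_magic must equal the generated magic for both magic.dump0 and magic.dump1 (the data read out of uring0), and diff -q then compares the two files in full; the run whose parameters follow copies uring0 back into the malloc bdev. A compact hedged equivalent of that check, not the test's exact read/[[ ]] construction (the log does not show where verify_magic is redirected from):

# hedged sketch of the magic verification
magic=$(head -c 1024 magic.dump0)          # in the test, $magic is the generated string itself
read -rn1024 verify_magic < magic.dump1
[[ $verify_magic == "$magic" ]] || exit 1
diff -q magic.dump0 magic.dump1            # whole-file comparison, as in uring.sh@71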
00:07:20.710 [2024-12-10 14:12:45.319282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62172 ] 00:07:20.710 { 00:07:20.710 "subsystems": [ 00:07:20.710 { 00:07:20.710 "subsystem": "bdev", 00:07:20.710 "config": [ 00:07:20.710 { 00:07:20.710 "params": { 00:07:20.710 "block_size": 512, 00:07:20.710 "num_blocks": 1048576, 00:07:20.710 "name": "malloc0" 00:07:20.710 }, 00:07:20.710 "method": "bdev_malloc_create" 00:07:20.710 }, 00:07:20.710 { 00:07:20.710 "params": { 00:07:20.710 "filename": "/dev/zram1", 00:07:20.710 "name": "uring0" 00:07:20.710 }, 00:07:20.710 "method": "bdev_uring_create" 00:07:20.710 }, 00:07:20.710 { 00:07:20.710 "method": "bdev_wait_for_examine" 00:07:20.710 } 00:07:20.710 ] 00:07:20.710 } 00:07:20.710 ] 00:07:20.710 } 00:07:20.710 [2024-12-10 14:12:45.461054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.710 [2024-12-10 14:12:45.491348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.710 [2024-12-10 14:12:45.518345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.089  [2024-12-10T14:12:47.863Z] Copying: 182/512 [MB] (182 MBps) [2024-12-10T14:12:48.800Z] Copying: 364/512 [MB] (182 MBps) [2024-12-10T14:12:48.800Z] Copying: 512/512 [MB] (average 181 MBps) 00:07:23.963 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:23.963 14:12:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.963 [2024-12-10 14:12:48.732058] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
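The run whose parameters follow adds a bdev_uring_delete entry for uring0 to the JSON config, so the bdev is created and then torn down within the same spdk_dd invocation; the NOT ... spdk_dd --ib=uring0 call after it is a negative test that passes only because spdk_dd fails with "Could not open bdev uring0: No such device". A sketch of that expected-failure pattern outside the autotest NOT helper; conf.json and the /dev/null redirection stand in for the test's /dev/fd plumbing:

# hedged sketch of the expected-failure check
if spdk_dd --ib=uring0 --of=/dev/null --json conf.json; then
    echo "uring0 unexpectedly still readable" >&2
    exit 1
fi
# reaching here (non-zero exit from spdk_dd) is the expected outcome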
00:07:23.963 [2024-12-10 14:12:48.732170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62217 ] 00:07:23.963 { 00:07:23.963 "subsystems": [ 00:07:23.963 { 00:07:23.963 "subsystem": "bdev", 00:07:23.963 "config": [ 00:07:23.963 { 00:07:23.963 "params": { 00:07:23.963 "block_size": 512, 00:07:23.963 "num_blocks": 1048576, 00:07:23.963 "name": "malloc0" 00:07:23.963 }, 00:07:23.963 "method": "bdev_malloc_create" 00:07:23.963 }, 00:07:23.963 { 00:07:23.963 "params": { 00:07:23.963 "filename": "/dev/zram1", 00:07:23.963 "name": "uring0" 00:07:23.963 }, 00:07:23.963 "method": "bdev_uring_create" 00:07:23.963 }, 00:07:23.963 { 00:07:23.963 "params": { 00:07:23.963 "name": "uring0" 00:07:23.963 }, 00:07:23.963 "method": "bdev_uring_delete" 00:07:23.963 }, 00:07:23.963 { 00:07:23.963 "method": "bdev_wait_for_examine" 00:07:23.963 } 00:07:23.963 ] 00:07:23.963 } 00:07:23.963 ] 00:07:23.963 } 00:07:24.223 [2024-12-10 14:12:48.874815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.223 [2024-12-10 14:12:48.902270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.223 [2024-12-10 14:12:48.929941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.223  [2024-12-10T14:12:49.319Z] Copying: 0/0 [B] (average 0 Bps) 00:07:24.482 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.482 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.482 14:12:49 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:24.741 [2024-12-10 14:12:49.355787] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:07:24.741 [2024-12-10 14:12:49.355898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62249 ] 00:07:24.741 { 00:07:24.741 "subsystems": [ 00:07:24.741 { 00:07:24.741 "subsystem": "bdev", 00:07:24.741 "config": [ 00:07:24.741 { 00:07:24.741 "params": { 00:07:24.741 "block_size": 512, 00:07:24.741 "num_blocks": 1048576, 00:07:24.741 "name": "malloc0" 00:07:24.741 }, 00:07:24.741 "method": "bdev_malloc_create" 00:07:24.741 }, 00:07:24.741 { 00:07:24.741 "params": { 00:07:24.741 "filename": "/dev/zram1", 00:07:24.741 "name": "uring0" 00:07:24.741 }, 00:07:24.741 "method": "bdev_uring_create" 00:07:24.741 }, 00:07:24.741 { 00:07:24.741 "params": { 00:07:24.741 "name": "uring0" 00:07:24.741 }, 00:07:24.741 "method": "bdev_uring_delete" 00:07:24.741 }, 00:07:24.741 { 00:07:24.741 "method": "bdev_wait_for_examine" 00:07:24.741 } 00:07:24.741 ] 00:07:24.741 } 00:07:24.741 ] 00:07:24.741 } 00:07:24.741 [2024-12-10 14:12:49.506805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.741 [2024-12-10 14:12:49.533777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.741 [2024-12-10 14:12:49.560706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.000 [2024-12-10 14:12:49.676825] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:25.000 [2024-12-10 14:12:49.676899] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:25.000 [2024-12-10 14:12:49.676925] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:07:25.000 [2024-12-10 14:12:49.676935] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.259 [2024-12-10 14:12:49.846039] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:25.259 14:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:25.518 00:07:25.518 real 0m13.124s 00:07:25.518 user 0m8.799s 00:07:25.518 sys 0m12.078s 00:07:25.518 14:12:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.518 ************************************ 00:07:25.518 END TEST dd_uring_copy 00:07:25.518 ************************************ 00:07:25.518 14:12:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.518 ************************************ 00:07:25.518 END TEST spdk_dd_uring 00:07:25.518 ************************************ 00:07:25.518 00:07:25.518 real 0m13.362s 00:07:25.518 user 0m8.937s 00:07:25.518 sys 0m12.184s 00:07:25.518 14:12:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.518 14:12:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:25.518 14:12:50 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:25.518 14:12:50 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.518 14:12:50 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.518 14:12:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.518 ************************************ 00:07:25.518 START TEST spdk_dd_sparse 00:07:25.518 ************************************ 00:07:25.518 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:25.518 * Looking for test storage... 00:07:25.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.518 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.518 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.518 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.778 --rc genhtml_branch_coverage=1 00:07:25.778 --rc genhtml_function_coverage=1 00:07:25.778 --rc genhtml_legend=1 00:07:25.778 --rc geninfo_all_blocks=1 00:07:25.778 --rc geninfo_unexecuted_blocks=1 00:07:25.778 00:07:25.778 ' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.778 --rc genhtml_branch_coverage=1 00:07:25.778 --rc genhtml_function_coverage=1 00:07:25.778 --rc genhtml_legend=1 00:07:25.778 --rc geninfo_all_blocks=1 00:07:25.778 --rc geninfo_unexecuted_blocks=1 00:07:25.778 00:07:25.778 ' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.778 --rc genhtml_branch_coverage=1 00:07:25.778 --rc genhtml_function_coverage=1 00:07:25.778 --rc genhtml_legend=1 00:07:25.778 --rc geninfo_all_blocks=1 00:07:25.778 --rc geninfo_unexecuted_blocks=1 00:07:25.778 00:07:25.778 ' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.778 --rc genhtml_branch_coverage=1 00:07:25.778 --rc genhtml_function_coverage=1 00:07:25.778 --rc genhtml_legend=1 00:07:25.778 --rc geninfo_all_blocks=1 00:07:25.778 --rc geninfo_unexecuted_blocks=1 00:07:25.778 00:07:25.778 ' 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.778 14:12:50 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:25.778 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:25.779 1+0 records in 00:07:25.779 1+0 records out 00:07:25.779 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00408182 s, 1.0 GB/s 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:25.779 1+0 records in 00:07:25.779 1+0 records out 00:07:25.779 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00575424 s, 729 MB/s 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:25.779 1+0 records in 00:07:25.779 1+0 records out 00:07:25.779 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00663683 s, 632 MB/s 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:25.779 ************************************ 00:07:25.779 START TEST dd_sparse_file_to_file 00:07:25.779 ************************************ 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:25.779 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:25.779 [2024-12-10 14:12:50.528304] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
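The dd_sparse_file_to_file case traced above reduces to a short shell flow: build a sparse input by writing three 4 MiB extents at offsets 0, 16 MiB and 32 MiB into file_zero1, then let spdk_dd copy it with hole skipping enabled. A minimal standalone sketch, assuming it is run from the SPDK repository root where the binary sits at build/bin/spdk_dd; the --json bdev/lvstore setup shown above is left out because a plain file-to-file copy does not need it:

  # 36 MiB sparse file: data at offsets 0, 16 MiB and 32 MiB, holes in between
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
  # copy with a 12 MiB I/O unit and hole skipping on the input side
  ./build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse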
00:07:25.779 [2024-12-10 14:12:50.528412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62343 ] 00:07:25.779 { 00:07:25.779 "subsystems": [ 00:07:25.779 { 00:07:25.779 "subsystem": "bdev", 00:07:25.779 "config": [ 00:07:25.779 { 00:07:25.779 "params": { 00:07:25.779 "block_size": 4096, 00:07:25.779 "filename": "dd_sparse_aio_disk", 00:07:25.779 "name": "dd_aio" 00:07:25.779 }, 00:07:25.779 "method": "bdev_aio_create" 00:07:25.779 }, 00:07:25.779 { 00:07:25.779 "params": { 00:07:25.779 "lvs_name": "dd_lvstore", 00:07:25.779 "bdev_name": "dd_aio" 00:07:25.779 }, 00:07:25.779 "method": "bdev_lvol_create_lvstore" 00:07:25.779 }, 00:07:25.779 { 00:07:25.779 "method": "bdev_wait_for_examine" 00:07:25.779 } 00:07:25.779 ] 00:07:25.779 } 00:07:25.779 ] 00:07:25.779 } 00:07:26.038 [2024-12-10 14:12:50.674789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.038 [2024-12-10 14:12:50.704611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.038 [2024-12-10 14:12:50.736688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.038  [2024-12-10T14:12:51.135Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:26.298 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:26.298 14:12:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:26.298 00:07:26.298 real 0m0.533s 00:07:26.298 user 0m0.310s 00:07:26.298 sys 0m0.271s 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:26.298 ************************************ 00:07:26.298 END TEST dd_sparse_file_to_file 00:07:26.298 ************************************ 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:26.298 ************************************ 00:07:26.298 START TEST dd_sparse_file_to_bdev 
00:07:26.298 ************************************ 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:26.298 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.299 [2024-12-10 14:12:51.111808] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:07:26.299 [2024-12-10 14:12:51.111905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62390 ] 00:07:26.299 { 00:07:26.299 "subsystems": [ 00:07:26.299 { 00:07:26.299 "subsystem": "bdev", 00:07:26.299 "config": [ 00:07:26.299 { 00:07:26.299 "params": { 00:07:26.299 "block_size": 4096, 00:07:26.299 "filename": "dd_sparse_aio_disk", 00:07:26.299 "name": "dd_aio" 00:07:26.299 }, 00:07:26.299 "method": "bdev_aio_create" 00:07:26.299 }, 00:07:26.299 { 00:07:26.299 "params": { 00:07:26.299 "lvs_name": "dd_lvstore", 00:07:26.299 "lvol_name": "dd_lvol", 00:07:26.299 "size_in_mib": 36, 00:07:26.299 "thin_provision": true 00:07:26.299 }, 00:07:26.299 "method": "bdev_lvol_create" 00:07:26.299 }, 00:07:26.299 { 00:07:26.299 "method": "bdev_wait_for_examine" 00:07:26.299 } 00:07:26.299 ] 00:07:26.299 } 00:07:26.299 ] 00:07:26.299 } 00:07:26.558 [2024-12-10 14:12:51.256722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.558 [2024-12-10 14:12:51.286240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.558 [2024-12-10 14:12:51.315249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.558  [2024-12-10T14:12:51.654Z] Copying: 12/36 [MB] (average 461 MBps) 00:07:26.817 00:07:26.817 00:07:26.817 real 0m0.461s 00:07:26.817 user 0m0.289s 00:07:26.817 sys 0m0.228s 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.817 ************************************ 00:07:26.817 END TEST dd_sparse_file_to_bdev 00:07:26.817 ************************************ 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:26.817 ************************************ 00:07:26.817 START TEST dd_sparse_bdev_to_file 00:07:26.817 ************************************ 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:26.817 14:12:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:26.817 [2024-12-10 14:12:51.625843] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:07:26.817 [2024-12-10 14:12:51.625955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62418 ] 00:07:26.817 { 00:07:26.817 "subsystems": [ 00:07:26.817 { 00:07:26.817 "subsystem": "bdev", 00:07:26.817 "config": [ 00:07:26.817 { 00:07:26.817 "params": { 00:07:26.817 "block_size": 4096, 00:07:26.817 "filename": "dd_sparse_aio_disk", 00:07:26.817 "name": "dd_aio" 00:07:26.817 }, 00:07:26.817 "method": "bdev_aio_create" 00:07:26.817 }, 00:07:26.817 { 00:07:26.817 "method": "bdev_wait_for_examine" 00:07:26.817 } 00:07:26.817 ] 00:07:26.817 } 00:07:26.817 ] 00:07:26.817 } 00:07:27.077 [2024-12-10 14:12:51.763196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.077 [2024-12-10 14:12:51.791366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.077 [2024-12-10 14:12:51.821631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.077  [2024-12-10T14:12:52.173Z] Copying: 12/36 [MB] (average 1333 MBps) 00:07:27.336 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:27.336 00:07:27.336 real 0m0.474s 00:07:27.336 user 0m0.269s 00:07:27.336 sys 0m0.256s 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:27.336 ************************************ 00:07:27.336 END TEST dd_sparse_bdev_to_file 00:07:27.336 ************************************ 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:27.336 00:07:27.336 real 0m1.881s 00:07:27.336 user 0m1.057s 00:07:27.336 sys 0m0.957s 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.336 14:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:27.336 ************************************ 00:07:27.336 END TEST spdk_dd_sparse 00:07:27.336 ************************************ 00:07:27.336 14:12:52 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:27.336 14:12:52 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.336 14:12:52 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.336 14:12:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:27.336 ************************************ 00:07:27.336 START TEST spdk_dd_negative 00:07:27.336 ************************************ 00:07:27.336 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:27.597 * Looking for test storage... 
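Each sparse case above verifies its result the same way: the apparent size (stat %s) of source and destination must match, and so must the count of allocated 512-byte blocks (stat %b), which is what proves the holes survived the copy (37748736 bytes apparent versus 24576 blocks, i.e. only 12 MiB actually allocated out of 36 MiB). A rough plain-shell equivalent of that check, using the same file names as the log:

  src_size=$(stat --printf=%s file_zero1)    # apparent size in bytes
  dst_size=$(stat --printf=%s file_zero2)
  src_blocks=$(stat --printf=%b file_zero1)  # allocated 512-byte blocks
  dst_blocks=$(stat --printf=%b file_zero2)
  [[ $src_size == "$dst_size" && $src_blocks == "$dst_blocks" ]] || exit 1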
00:07:27.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.597 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.597 --rc genhtml_branch_coverage=1 00:07:27.597 --rc genhtml_function_coverage=1 00:07:27.597 --rc genhtml_legend=1 00:07:27.597 --rc geninfo_all_blocks=1 00:07:27.597 --rc geninfo_unexecuted_blocks=1 00:07:27.598 00:07:27.598 ' 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.598 --rc genhtml_branch_coverage=1 00:07:27.598 --rc genhtml_function_coverage=1 00:07:27.598 --rc genhtml_legend=1 00:07:27.598 --rc geninfo_all_blocks=1 00:07:27.598 --rc geninfo_unexecuted_blocks=1 00:07:27.598 00:07:27.598 ' 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.598 --rc genhtml_branch_coverage=1 00:07:27.598 --rc genhtml_function_coverage=1 00:07:27.598 --rc genhtml_legend=1 00:07:27.598 --rc geninfo_all_blocks=1 00:07:27.598 --rc geninfo_unexecuted_blocks=1 00:07:27.598 00:07:27.598 ' 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.598 --rc genhtml_branch_coverage=1 00:07:27.598 --rc genhtml_function_coverage=1 00:07:27.598 --rc genhtml_legend=1 00:07:27.598 --rc geninfo_all_blocks=1 00:07:27.598 --rc geninfo_unexecuted_blocks=1 00:07:27.598 00:07:27.598 ' 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.598 ************************************ 00:07:27.598 START TEST 
dd_invalid_arguments 00:07:27.598 ************************************ 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.598 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:27.900 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:27.900 00:07:27.900 CPU options: 00:07:27.900 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:27.900 (like [0,1,10]) 00:07:27.900 --lcores lcore to CPU mapping list. The list is in the format: 00:07:27.900 [<,lcores[@CPUs]>...] 00:07:27.900 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:27.900 Within the group, '-' is used for range separator, 00:07:27.900 ',' is used for single number separator. 00:07:27.900 '( )' can be omitted for single element group, 00:07:27.900 '@' can be omitted if cpus and lcores have the same value 00:07:27.900 --disable-cpumask-locks Disable CPU core lock files. 00:07:27.900 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:27.900 pollers in the app support interrupt mode) 00:07:27.900 -p, --main-core main (primary) core for DPDK 00:07:27.900 00:07:27.900 Configuration options: 00:07:27.900 -c, --config, --json JSON config file 00:07:27.900 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:27.900 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:27.900 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:27.900 --rpcs-allowed comma-separated list of permitted RPCS 00:07:27.900 --json-ignore-init-errors don't exit on invalid config entry 00:07:27.900 00:07:27.900 Memory options: 00:07:27.900 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:27.900 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:27.900 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:27.900 -R, --huge-unlink unlink huge files after initialization 00:07:27.900 -n, --mem-channels number of memory channels used for DPDK 00:07:27.900 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:27.900 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:27.900 --no-huge run without using hugepages 00:07:27.900 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:27.900 -i, --shm-id shared memory ID (optional) 00:07:27.900 -g, --single-file-segments force creating just one hugetlbfs file 00:07:27.900 00:07:27.900 PCI options: 00:07:27.900 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:27.900 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:27.900 -u, --no-pci disable PCI access 00:07:27.900 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:27.900 00:07:27.900 Log options: 00:07:27.900 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:27.900 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:27.900 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:27.900 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:27.900 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:27.900 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:27.900 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:27.900 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:27.900 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:27.900 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:27.900 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:27.900 --silence-noticelog disable notice level logging to stderr 00:07:27.900 00:07:27.900 Trace options: 00:07:27.900 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:27.900 setting 0 to disable trace (default 32768) 00:07:27.900 Tracepoints vary in size and can use more than one trace entry. 00:07:27.900 -e, --tpoint-group [:] 00:07:27.900 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:27.900 [2024-12-10 14:12:52.452245] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:07:27.900 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:27.900 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:27.900 bdev_raid, scheduler, all). 00:07:27.900 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:27.901 a tracepoint group. First tpoint inside a group can be enabled by 00:07:27.901 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:27.901 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:27.901 in /include/spdk_internal/trace_defs.h 00:07:27.901 00:07:27.901 Other options: 00:07:27.901 -h, --help show this usage 00:07:27.901 -v, --version print SPDK version 00:07:27.901 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:27.901 --env-context Opaque context for use of the env implementation 00:07:27.901 00:07:27.901 Application specific: 00:07:27.901 [--------- DD Options ---------] 00:07:27.901 --if Input file. Must specify either --if or --ib. 00:07:27.901 --ib Input bdev. Must specifier either --if or --ib 00:07:27.901 --of Output file. Must specify either --of or --ob. 00:07:27.901 --ob Output bdev. Must specify either --of or --ob. 00:07:27.901 --iflag Input file flags. 00:07:27.901 --oflag Output file flags. 00:07:27.901 --bs I/O unit size (default: 4096) 00:07:27.901 --qd Queue depth (default: 2) 00:07:27.901 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:27.901 --skip Skip this many I/O units at start of input. (default: 0) 00:07:27.901 --seek Skip this many I/O units at start of output. (default: 0) 00:07:27.901 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:27.901 --sparse Enable hole skipping in input target 00:07:27.901 Available iflag and oflag values: 00:07:27.901 append - append mode 00:07:27.901 direct - use direct I/O for data 00:07:27.901 directory - fail unless a directory 00:07:27.901 dsync - use synchronized I/O for data 00:07:27.901 noatime - do not update access time 00:07:27.901 noctty - do not assign controlling terminal from file 00:07:27.901 nofollow - do not follow symlinks 00:07:27.901 nonblock - use non-blocking I/O 00:07:27.901 sync - use synchronized I/O for data and metadata 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.901 00:07:27.901 real 0m0.069s 00:07:27.901 user 0m0.048s 00:07:27.901 sys 0m0.021s 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:27.901 ************************************ 00:07:27.901 END TEST dd_invalid_arguments 00:07:27.901 ************************************ 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.901 ************************************ 00:07:27.901 START TEST dd_double_input 00:07:27.901 ************************************ 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:27.901 [2024-12-10 14:12:52.577731] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
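The dd_invalid_arguments and dd_double_input cases follow the same pattern: invoke spdk_dd with arguments it must reject and treat a zero exit status as the test failure. The harness expresses this through its NOT helper from autotest_common.sh; a rough standalone equivalent, assuming the same repository layout and a dd.dump0 created beforehand with touch:

  # passing both a file input and a bdev input must be rejected
  if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob= 2>/dev/null; then
      echo "spdk_dd accepted conflicting --if/--ib arguments" >&2
      exit 1
  fi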
00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.901 00:07:27.901 real 0m0.077s 00:07:27.901 user 0m0.049s 00:07:27.901 sys 0m0.026s 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:27.901 ************************************ 00:07:27.901 END TEST dd_double_input 00:07:27.901 ************************************ 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.901 ************************************ 00:07:27.901 START TEST dd_double_output 00:07:27.901 ************************************ 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.901 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:27.901 [2024-12-10 14:12:52.704527] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.184 00:07:28.184 real 0m0.068s 00:07:28.184 user 0m0.035s 00:07:28.184 sys 0m0.033s 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.184 ************************************ 00:07:28.184 END TEST dd_double_output 00:07:28.184 ************************************ 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.184 ************************************ 00:07:28.184 START TEST dd_no_input 00:07:28.184 ************************************ 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.184 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:28.185 [2024-12-10 14:12:52.835867] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.185 00:07:28.185 real 0m0.083s 00:07:28.185 user 0m0.051s 00:07:28.185 sys 0m0.030s 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:28.185 ************************************ 00:07:28.185 END TEST dd_no_input 00:07:28.185 ************************************ 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.185 ************************************ 00:07:28.185 START TEST dd_no_output 00:07:28.185 ************************************ 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.185 [2024-12-10 14:12:52.968802] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:07:28.185 14:12:52 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.185 00:07:28.185 real 0m0.077s 00:07:28.185 user 0m0.044s 00:07:28.185 sys 0m0.032s 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.185 14:12:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:28.185 ************************************ 00:07:28.185 END TEST dd_no_output 00:07:28.185 ************************************ 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.444 ************************************ 00:07:28.444 START TEST dd_wrong_blocksize 00:07:28.444 ************************************ 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:28.444 [2024-12-10 14:12:53.099774] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.444 00:07:28.444 real 0m0.078s 00:07:28.444 user 0m0.053s 00:07:28.444 sys 0m0.023s 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:28.444 ************************************ 00:07:28.444 END TEST dd_wrong_blocksize 00:07:28.444 ************************************ 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.444 ************************************ 00:07:28.444 START TEST dd_smaller_blocksize 00:07:28.444 ************************************ 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.444 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.445 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.445 
14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.445 14:12:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:28.445 [2024-12-10 14:12:53.232028] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:07:28.445 [2024-12-10 14:12:53.232117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62650 ] 00:07:28.705 [2024-12-10 14:12:53.383214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.705 [2024-12-10 14:12:53.421890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.705 [2024-12-10 14:12:53.453918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.964 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:29.224 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:29.224 [2024-12-10 14:12:53.945151] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:29.224 [2024-12-10 14:12:53.945224] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.224 [2024-12-10 14:12:54.016772] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:29.483 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:29.483 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.483 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:29.483 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:29.483 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:29.483 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.484 00:07:29.484 real 0m0.900s 00:07:29.484 user 0m0.344s 00:07:29.484 sys 0m0.449s 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:29.484 ************************************ 00:07:29.484 END TEST dd_smaller_blocksize 00:07:29.484 ************************************ 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.484 ************************************ 00:07:29.484 START TEST dd_invalid_count 00:07:29.484 ************************************ 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
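The dd_smaller_blocksize case just above is the one negative test that reaches a real copy attempt: an absurd --bs value drives allocation into ENOMEM and spdk_dd reports "Cannot allocate memory - try smaller block size value" before exiting non-zero. A standalone repro sketch under the same assumptions (repo root, dd.dump0 and dd.dump1 already created by touch):

  # oversized I/O unit: allocation must fail and the tool must exit non-zero
  if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999; then
      echo "spdk_dd accepted an absurd --bs value" >&2
      exit 1
  fi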
00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:29.484 [2024-12-10 14:12:54.185609] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.484 00:07:29.484 real 0m0.080s 00:07:29.484 user 0m0.042s 00:07:29.484 sys 0m0.036s 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:29.484 ************************************ 00:07:29.484 END TEST dd_invalid_count 00:07:29.484 ************************************ 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.484 ************************************ 
00:07:29.484 START TEST dd_invalid_oflag 00:07:29.484 ************************************ 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.484 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:29.744 [2024-12-10 14:12:54.325620] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.744 00:07:29.744 real 0m0.081s 00:07:29.744 user 0m0.052s 00:07:29.744 sys 0m0.028s 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:29.744 ************************************ 00:07:29.744 END TEST dd_invalid_oflag 00:07:29.744 ************************************ 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.744 ************************************ 00:07:29.744 START TEST dd_invalid_iflag 00:07:29.744 
************************************ 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:29.744 [2024-12-10 14:12:54.459035] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.744 00:07:29.744 real 0m0.077s 00:07:29.744 user 0m0.046s 00:07:29.744 sys 0m0.030s 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:29.744 ************************************ 00:07:29.744 END TEST dd_invalid_iflag 00:07:29.744 ************************************ 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.744 ************************************ 00:07:29.744 START TEST dd_unknown_flag 00:07:29.744 ************************************ 00:07:29.744 
14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:29.744 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.745 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:30.004 [2024-12-10 14:12:54.597489] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:30.004 [2024-12-10 14:12:54.597609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62743 ] 00:07:30.004 [2024-12-10 14:12:54.745820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.004 [2024-12-10 14:12:54.777793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.004 [2024-12-10 14:12:54.809515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.004 [2024-12-10 14:12:54.829702] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:30.004 [2024-12-10 14:12:54.829789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.004 [2024-12-10 14:12:54.829840] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:30.004 [2024-12-10 14:12:54.829858] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.004 [2024-12-10 14:12:54.830143] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:30.004 [2024-12-10 14:12:54.830164] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.004 [2024-12-10 14:12:54.830246] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:30.004 [2024-12-10 14:12:54.830271] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:30.264 [2024-12-10 14:12:54.896492] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.264 00:07:30.264 real 0m0.419s 00:07:30.264 user 0m0.219s 00:07:30.264 sys 0m0.109s 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.264 ************************************ 00:07:30.264 END TEST dd_unknown_flag 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:30.264 ************************************ 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.264 14:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.264 ************************************ 00:07:30.264 START TEST dd_invalid_json 00:07:30.264 ************************************ 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.264 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.264 [2024-12-10 14:12:55.092635] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:30.264 [2024-12-10 14:12:55.092804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62767 ] 00:07:30.522 [2024-12-10 14:12:55.240113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.522 [2024-12-10 14:12:55.268751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.522 [2024-12-10 14:12:55.268852] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:30.522 [2024-12-10 14:12:55.268870] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.522 [2024-12-10 14:12:55.268878] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.522 [2024-12-10 14:12:55.268913] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.522 00:07:30.522 real 0m0.315s 00:07:30.522 user 0m0.149s 00:07:30.522 sys 0m0.064s 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.522 ************************************ 00:07:30.522 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.522 END TEST dd_invalid_json 00:07:30.522 ************************************ 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.781 ************************************ 00:07:30.781 START TEST dd_invalid_seek 00:07:30.781 ************************************ 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:30.781 
14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.781 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:30.781 [2024-12-10 14:12:55.444546] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:30.781 [2024-12-10 14:12:55.444651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62801 ] 00:07:30.781 { 00:07:30.781 "subsystems": [ 00:07:30.781 { 00:07:30.781 "subsystem": "bdev", 00:07:30.781 "config": [ 00:07:30.781 { 00:07:30.781 "params": { 00:07:30.781 "block_size": 512, 00:07:30.781 "num_blocks": 512, 00:07:30.781 "name": "malloc0" 00:07:30.781 }, 00:07:30.781 "method": "bdev_malloc_create" 00:07:30.781 }, 00:07:30.781 { 00:07:30.781 "params": { 00:07:30.781 "block_size": 512, 00:07:30.781 "num_blocks": 512, 00:07:30.781 "name": "malloc1" 00:07:30.781 }, 00:07:30.781 "method": "bdev_malloc_create" 00:07:30.781 }, 00:07:30.781 { 00:07:30.781 "method": "bdev_wait_for_examine" 00:07:30.781 } 00:07:30.781 ] 00:07:30.781 } 00:07:30.781 ] 00:07:30.781 } 00:07:30.781 [2024-12-10 14:12:55.590915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.041 [2024-12-10 14:12:55.619630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.041 [2024-12-10 14:12:55.646765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.041 [2024-12-10 14:12:55.690327] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:31.041 [2024-12-10 14:12:55.690443] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.041 [2024-12-10 14:12:55.758769] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.041 00:07:31.041 real 0m0.432s 00:07:31.041 user 0m0.282s 00:07:31.041 sys 0m0.115s 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.041 ************************************ 00:07:31.041 END TEST dd_invalid_seek 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:31.041 ************************************ 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:31.041 ************************************ 00:07:31.041 START TEST dd_invalid_skip 00:07:31.041 ************************************ 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.041 14:12:55 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:31.300 [2024-12-10 14:12:55.927860] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:31.300 [2024-12-10 14:12:55.928647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62829 ] 00:07:31.300 { 00:07:31.300 "subsystems": [ 00:07:31.300 { 00:07:31.300 "subsystem": "bdev", 00:07:31.300 "config": [ 00:07:31.300 { 00:07:31.300 "params": { 00:07:31.300 "block_size": 512, 00:07:31.300 "num_blocks": 512, 00:07:31.300 "name": "malloc0" 00:07:31.300 }, 00:07:31.300 "method": "bdev_malloc_create" 00:07:31.300 }, 00:07:31.300 { 00:07:31.300 "params": { 00:07:31.300 "block_size": 512, 00:07:31.300 "num_blocks": 512, 00:07:31.300 "name": "malloc1" 00:07:31.300 }, 00:07:31.300 "method": "bdev_malloc_create" 00:07:31.300 }, 00:07:31.300 { 00:07:31.300 "method": "bdev_wait_for_examine" 00:07:31.300 } 00:07:31.300 ] 00:07:31.300 } 00:07:31.300 ] 00:07:31.300 } 00:07:31.300 [2024-12-10 14:12:56.076920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.300 [2024-12-10 14:12:56.111599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.559 [2024-12-10 14:12:56.141809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.559 [2024-12-10 14:12:56.186736] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:31.559 [2024-12-10 14:12:56.186819] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.559 [2024-12-10 14:12:56.248319] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.559 00:07:31.559 real 0m0.435s 00:07:31.559 user 0m0.281s 00:07:31.559 sys 0m0.117s 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.559 ************************************ 00:07:31.559 END TEST dd_invalid_skip 00:07:31.559 ************************************ 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:31.559 ************************************ 00:07:31.559 START TEST dd_invalid_input_count 00:07:31.559 ************************************ 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:31.559 14:12:56 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.559 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:31.819 [2024-12-10 14:12:56.411299] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:31.819 [2024-12-10 14:12:56.411397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62868 ] 00:07:31.819 { 00:07:31.819 "subsystems": [ 00:07:31.819 { 00:07:31.819 "subsystem": "bdev", 00:07:31.819 "config": [ 00:07:31.819 { 00:07:31.819 "params": { 00:07:31.819 "block_size": 512, 00:07:31.819 "num_blocks": 512, 00:07:31.819 "name": "malloc0" 00:07:31.819 }, 00:07:31.819 "method": "bdev_malloc_create" 00:07:31.819 }, 00:07:31.819 { 00:07:31.819 "params": { 00:07:31.819 "block_size": 512, 00:07:31.819 "num_blocks": 512, 00:07:31.819 "name": "malloc1" 00:07:31.819 }, 00:07:31.819 "method": "bdev_malloc_create" 00:07:31.819 }, 00:07:31.819 { 00:07:31.819 "method": "bdev_wait_for_examine" 00:07:31.819 } 00:07:31.819 ] 00:07:31.819 } 00:07:31.819 ] 00:07:31.819 } 00:07:31.819 [2024-12-10 14:12:56.555904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.819 [2024-12-10 14:12:56.583053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.819 [2024-12-10 14:12:56.610224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.819 [2024-12-10 14:12:56.653682] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:31.819 [2024-12-10 14:12:56.653750] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.078 [2024-12-10 14:12:56.713902] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.078 00:07:32.078 real 0m0.414s 00:07:32.078 user 0m0.282s 00:07:32.078 sys 0m0.095s 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.078 ************************************ 00:07:32.078 END TEST dd_invalid_input_count 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 ************************************ 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.078 14:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 ************************************ 00:07:32.078 START TEST dd_invalid_output_count 00:07:32.078 ************************************ 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.079 14:12:56 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:32.079 { 00:07:32.079 "subsystems": [ 00:07:32.079 { 00:07:32.079 "subsystem": "bdev", 00:07:32.079 "config": [ 00:07:32.079 { 00:07:32.079 "params": { 00:07:32.079 "block_size": 512, 00:07:32.079 "num_blocks": 512, 00:07:32.079 "name": "malloc0" 00:07:32.079 }, 00:07:32.079 "method": "bdev_malloc_create" 00:07:32.079 }, 00:07:32.079 { 00:07:32.079 "method": "bdev_wait_for_examine" 00:07:32.079 } 00:07:32.079 ] 00:07:32.079 } 00:07:32.079 ] 00:07:32.079 } 00:07:32.079 [2024-12-10 14:12:56.881726] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 
initialization... 00:07:32.079 [2024-12-10 14:12:56.881823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62896 ] 00:07:32.338 [2024-12-10 14:12:57.026619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.338 [2024-12-10 14:12:57.052919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.338 [2024-12-10 14:12:57.079466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.338 [2024-12-10 14:12:57.114648] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:32.338 [2024-12-10 14:12:57.114729] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.597 [2024-12-10 14:12:57.182466] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.597 00:07:32.597 real 0m0.421s 00:07:32.597 user 0m0.275s 00:07:32.597 sys 0m0.104s 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.597 ************************************ 00:07:32.597 END TEST dd_invalid_output_count 00:07:32.597 ************************************ 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:32.597 14:12:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:32.598 ************************************ 00:07:32.598 START TEST dd_bs_not_multiple 00:07:32.598 ************************************ 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:32.598 14:12:57 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.598 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:32.598 [2024-12-10 14:12:57.353976] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:32.598 [2024-12-10 14:12:57.354083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62933 ] 00:07:32.598 { 00:07:32.598 "subsystems": [ 00:07:32.598 { 00:07:32.598 "subsystem": "bdev", 00:07:32.598 "config": [ 00:07:32.598 { 00:07:32.598 "params": { 00:07:32.598 "block_size": 512, 00:07:32.598 "num_blocks": 512, 00:07:32.598 "name": "malloc0" 00:07:32.598 }, 00:07:32.598 "method": "bdev_malloc_create" 00:07:32.598 }, 00:07:32.598 { 00:07:32.598 "params": { 00:07:32.598 "block_size": 512, 00:07:32.598 "num_blocks": 512, 00:07:32.598 "name": "malloc1" 00:07:32.598 }, 00:07:32.598 "method": "bdev_malloc_create" 00:07:32.598 }, 00:07:32.598 { 00:07:32.598 "method": "bdev_wait_for_examine" 00:07:32.598 } 00:07:32.598 ] 00:07:32.598 } 00:07:32.598 ] 00:07:32.598 } 00:07:32.857 [2024-12-10 14:12:57.493772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.857 [2024-12-10 14:12:57.521045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.857 [2024-12-10 14:12:57.548019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.857 [2024-12-10 14:12:57.591229] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:32.857 [2024-12-10 14:12:57.591302] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.857 [2024-12-10 14:12:57.650561] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.116 00:07:33.116 real 0m0.408s 00:07:33.116 user 0m0.261s 00:07:33.116 sys 0m0.111s 00:07:33.116 ************************************ 00:07:33.116 END TEST dd_bs_not_multiple 00:07:33.116 ************************************ 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 00:07:33.116 real 0m5.576s 00:07:33.116 user 0m2.937s 00:07:33.116 sys 0m2.063s 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.116 14:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 ************************************ 00:07:33.116 END TEST spdk_dd_negative 00:07:33.116 ************************************ 00:07:33.116 00:07:33.116 real 1m4.434s 00:07:33.116 user 0m40.851s 00:07:33.116 sys 0m28.013s 00:07:33.116 14:12:57 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.116 14:12:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 
************************************ 00:07:33.116 END TEST spdk_dd 00:07:33.116 ************************************ 00:07:33.116 14:12:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:33.116 14:12:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:33.116 14:12:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:33.116 14:12:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.117 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.117 14:12:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:33.117 14:12:57 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:33.117 14:12:57 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:33.117 14:12:57 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:33.117 14:12:57 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:33.117 14:12:57 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:33.117 14:12:57 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:33.117 14:12:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.117 14:12:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.117 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.117 ************************************ 00:07:33.117 START TEST nvmf_tcp 00:07:33.117 ************************************ 00:07:33.117 14:12:57 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:33.117 * Looking for test storage... 00:07:33.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:33.376 14:12:57 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.376 14:12:57 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.376 14:12:57 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.376 14:12:58 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.376 --rc genhtml_branch_coverage=1 00:07:33.376 --rc genhtml_function_coverage=1 00:07:33.376 --rc genhtml_legend=1 00:07:33.376 --rc geninfo_all_blocks=1 00:07:33.376 --rc geninfo_unexecuted_blocks=1 00:07:33.376 00:07:33.376 ' 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.376 --rc genhtml_branch_coverage=1 00:07:33.376 --rc genhtml_function_coverage=1 00:07:33.376 --rc genhtml_legend=1 00:07:33.376 --rc geninfo_all_blocks=1 00:07:33.376 --rc geninfo_unexecuted_blocks=1 00:07:33.376 00:07:33.376 ' 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.376 --rc genhtml_branch_coverage=1 00:07:33.376 --rc genhtml_function_coverage=1 00:07:33.376 --rc genhtml_legend=1 00:07:33.376 --rc geninfo_all_blocks=1 00:07:33.376 --rc geninfo_unexecuted_blocks=1 00:07:33.376 00:07:33.376 ' 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.376 --rc genhtml_branch_coverage=1 00:07:33.376 --rc genhtml_function_coverage=1 00:07:33.376 --rc genhtml_legend=1 00:07:33.376 --rc geninfo_all_blocks=1 00:07:33.376 --rc geninfo_unexecuted_blocks=1 00:07:33.376 00:07:33.376 ' 00:07:33.376 14:12:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:33.376 14:12:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:33.376 14:12:58 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.376 14:12:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.376 ************************************ 00:07:33.376 START TEST nvmf_target_core 00:07:33.376 ************************************ 00:07:33.376 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:33.376 * Looking for test storage... 00:07:33.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:33.376 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.376 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.376 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.636 --rc genhtml_branch_coverage=1 00:07:33.636 --rc genhtml_function_coverage=1 00:07:33.636 --rc genhtml_legend=1 00:07:33.636 --rc geninfo_all_blocks=1 00:07:33.636 --rc geninfo_unexecuted_blocks=1 00:07:33.636 00:07:33.636 ' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.636 --rc genhtml_branch_coverage=1 00:07:33.636 --rc genhtml_function_coverage=1 00:07:33.636 --rc genhtml_legend=1 00:07:33.636 --rc geninfo_all_blocks=1 00:07:33.636 --rc geninfo_unexecuted_blocks=1 00:07:33.636 00:07:33.636 ' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.636 --rc genhtml_branch_coverage=1 00:07:33.636 --rc genhtml_function_coverage=1 00:07:33.636 --rc genhtml_legend=1 00:07:33.636 --rc geninfo_all_blocks=1 00:07:33.636 --rc geninfo_unexecuted_blocks=1 00:07:33.636 00:07:33.636 ' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.636 --rc genhtml_branch_coverage=1 00:07:33.636 --rc genhtml_function_coverage=1 00:07:33.636 --rc genhtml_legend=1 00:07:33.636 --rc geninfo_all_blocks=1 00:07:33.636 --rc geninfo_unexecuted_blocks=1 00:07:33.636 00:07:33.636 ' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.636 14:12:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.637 ************************************ 00:07:33.637 START TEST nvmf_host_management 00:07:33.637 ************************************ 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:33.637 * Looking for test storage... 
00:07:33.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.637 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.898 --rc genhtml_branch_coverage=1 00:07:33.898 --rc genhtml_function_coverage=1 00:07:33.898 --rc genhtml_legend=1 00:07:33.898 --rc geninfo_all_blocks=1 00:07:33.898 --rc geninfo_unexecuted_blocks=1 00:07:33.898 00:07:33.898 ' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.898 --rc genhtml_branch_coverage=1 00:07:33.898 --rc genhtml_function_coverage=1 00:07:33.898 --rc genhtml_legend=1 00:07:33.898 --rc geninfo_all_blocks=1 00:07:33.898 --rc geninfo_unexecuted_blocks=1 00:07:33.898 00:07:33.898 ' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.898 --rc genhtml_branch_coverage=1 00:07:33.898 --rc genhtml_function_coverage=1 00:07:33.898 --rc genhtml_legend=1 00:07:33.898 --rc geninfo_all_blocks=1 00:07:33.898 --rc geninfo_unexecuted_blocks=1 00:07:33.898 00:07:33.898 ' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.898 --rc genhtml_branch_coverage=1 00:07:33.898 --rc genhtml_function_coverage=1 00:07:33.898 --rc genhtml_legend=1 00:07:33.898 --rc geninfo_all_blocks=1 00:07:33.898 --rc geninfo_unexecuted_blocks=1 00:07:33.898 00:07:33.898 ' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.898 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.899 14:12:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:33.899 Cannot find device "nvmf_init_br" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:33.899 Cannot find device "nvmf_init_br2" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:33.899 Cannot find device "nvmf_tgt_br" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.899 Cannot find device "nvmf_tgt_br2" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:33.899 Cannot find device "nvmf_init_br" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:33.899 Cannot find device "nvmf_init_br2" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:33.899 Cannot find device "nvmf_tgt_br" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:33.899 Cannot find device "nvmf_tgt_br2" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:33.899 Cannot find device "nvmf_br" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:33.899 Cannot find device "nvmf_init_if" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:33.899 Cannot find device "nvmf_init_if2" 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:33.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:33.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:33.899 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:34.159 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:34.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:07:34.160 00:07:34.160 --- 10.0.0.3 ping statistics --- 00:07:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.160 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:34.160 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:34.160 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:07:34.160 00:07:34.160 --- 10.0.0.4 ping statistics --- 00:07:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.160 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:34.160 00:07:34.160 --- 10.0.0.1 ping statistics --- 00:07:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.160 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:34.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:07:34.160 00:07:34.160 --- 10.0.0.2 ping statistics --- 00:07:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.160 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.160 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.420 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=63266 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 63266 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 63266 ']' 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.420 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.420 [2024-12-10 14:12:59.083447] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:34.420 [2024-12-10 14:12:59.083539] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.420 [2024-12-10 14:12:59.238619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.679 [2024-12-10 14:12:59.282047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.679 [2024-12-10 14:12:59.282112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.679 [2024-12-10 14:12:59.282125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.679 [2024-12-10 14:12:59.282136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.679 [2024-12-10 14:12:59.282145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.679 [2024-12-10 14:12:59.283069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.679 [2024-12-10 14:12:59.283201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.680 [2024-12-10 14:12:59.283349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:34.680 [2024-12-10 14:12:59.283355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.680 [2024-12-10 14:12:59.318586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.680 [2024-12-10 14:12:59.415298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.680 Malloc0 00:07:34.680 [2024-12-10 14:12:59.481460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.680 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=63318 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 63318 /var/tmp/bdevperf.sock 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 63318 ']' 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:34.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:34.939 { 00:07:34.939 "params": { 00:07:34.939 "name": "Nvme$subsystem", 00:07:34.939 "trtype": "$TEST_TRANSPORT", 00:07:34.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.939 "adrfam": "ipv4", 00:07:34.939 "trsvcid": "$NVMF_PORT", 00:07:34.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.939 "hdgst": ${hdgst:-false}, 00:07:34.939 "ddgst": ${ddgst:-false} 00:07:34.939 }, 00:07:34.939 "method": "bdev_nvme_attach_controller" 00:07:34.939 } 00:07:34.939 EOF 00:07:34.939 )") 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:34.939 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:34.939 "params": { 00:07:34.939 "name": "Nvme0", 00:07:34.939 "trtype": "tcp", 00:07:34.939 "traddr": "10.0.0.3", 00:07:34.939 "adrfam": "ipv4", 00:07:34.939 "trsvcid": "4420", 00:07:34.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.939 "hdgst": false, 00:07:34.939 "ddgst": false 00:07:34.939 }, 00:07:34.939 "method": "bdev_nvme_attach_controller" 00:07:34.939 }' 00:07:34.939 [2024-12-10 14:12:59.584512] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:07:34.939 [2024-12-10 14:12:59.584618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63318 ] 00:07:34.939 [2024-12-10 14:12:59.736825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.201 [2024-12-10 14:12:59.775064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.201 [2024-12-10 14:12:59.816461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.201 Running I/O for 10 seconds... 
00:07:35.201 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.201 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:35.201 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:35.201 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.201 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.201 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.460 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:35.460 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:35.460 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.721 14:13:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.721 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.721 [2024-12-10 14:13:00.374326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:35.721 [2024-12-10 14:13:00.374741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 [2024-12-10 14:13:00.374936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.721 [2024-12-10 14:13:00.374947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.721 
[2024-12-10 14:13:00.374971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.374983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.374993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375177] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.722 [2024-12-10 14:13:00.375717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.722 [2024-12-10 14:13:00.375726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.723 [2024-12-10 14:13:00.377003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:35.723 task offset: 86656 on job bdev=Nvme0n1 fails 00:07:35.723 00:07:35.723 Latency(us) 00:07:35.723 [2024-12-10T14:13:00.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.723 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:35.723 Job: Nvme0n1 ended in about 0.45 seconds with error 00:07:35.723 Verification LBA range: start 0x0 length 0x400 00:07:35.723 Nvme0n1 : 0.45 1422.88 88.93 142.29 0.00 39301.59 2010.76 43611.23 00:07:35.723 [2024-12-10T14:13:00.560Z] =================================================================================================================== 00:07:35.723 [2024-12-10T14:13:00.560Z] Total : 1422.88 88.93 142.29 0.00 39301.59 2010.76 43611.23 00:07:35.723 [2024-12-10 14:13:00.379079] app.c:1064:spdk_app_stop: 
*WARNING*: spdk_app_stop'd on non-zero 00:07:35.723 [2024-12-10 14:13:00.379106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbece0 (9): Bad file descriptor 00:07:35.723 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.723 [2024-12-10 14:13:00.381869] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:35.723 [2024-12-10 14:13:00.381981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:35.723 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:35.723 [2024-12-10 14:13:00.382019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:35.723 [2024-12-10 14:13:00.382039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:35.723 [2024-12-10 14:13:00.382049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:35.723 [2024-12-10 14:13:00.382059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:35.723 [2024-12-10 14:13:00.382067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdbece0 00:07:35.723 [2024-12-10 14:13:00.382103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbece0 (9): Bad file descriptor 00:07:35.723 [2024-12-10 14:13:00.382122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:35.723 [2024-12-10 14:13:00.382132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:35.723 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.723 [2024-12-10 14:13:00.382143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:35.723 [2024-12-10 14:13:00.382154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
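The flood of ABORTED - SQ DELETION completions above, followed by the "does not allow host" and "Resetting controller failed" errors, is the intended effect of step @84: while bdevperf is mid-run the test revokes host0's access to cnode0, the target tears down the queue pair (aborting the 64 outstanding verify I/Os), and the initiator's reconnect attempts are rejected until step @85 puts the host back on the allow-list. The same round trip can be driven by hand against the running target; the rpc.py path and NQNs are the ones used in this run, and the sleep is only an illustrative pause, not part of the test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Revoke access while I/O is in flight: the target deletes the submission queue,
# so the host sees ABORTED - SQ DELETION and its FABRIC CONNECT retries are
# refused with "does not allow host".
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # window in which the initiator-side errors above would accumulate
# Restore access so the follow-up 1-second bdevperf run can connect cleanly.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0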
00:07:35.723 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.723 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.723 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 63318 00:07:36.660 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (63318) - No such process 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.660 { 00:07:36.660 "params": { 00:07:36.660 "name": "Nvme$subsystem", 00:07:36.660 "trtype": "$TEST_TRANSPORT", 00:07:36.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.660 "adrfam": "ipv4", 00:07:36.660 "trsvcid": "$NVMF_PORT", 00:07:36.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.660 "hdgst": ${hdgst:-false}, 00:07:36.660 "ddgst": ${ddgst:-false} 00:07:36.660 }, 00:07:36.660 "method": "bdev_nvme_attach_controller" 00:07:36.660 } 00:07:36.660 EOF 00:07:36.660 )") 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:36.660 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.660 "params": { 00:07:36.660 "name": "Nvme0", 00:07:36.660 "trtype": "tcp", 00:07:36.660 "traddr": "10.0.0.3", 00:07:36.660 "adrfam": "ipv4", 00:07:36.660 "trsvcid": "4420", 00:07:36.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:36.660 "hdgst": false, 00:07:36.660 "ddgst": false 00:07:36.660 }, 00:07:36.660 "method": "bdev_nvme_attach_controller" 00:07:36.660 }' 00:07:36.660 [2024-12-10 14:13:01.451595] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:36.660 [2024-12-10 14:13:01.451692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63358 ] 00:07:36.919 [2024-12-10 14:13:01.596936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.919 [2024-12-10 14:13:01.627843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.919 [2024-12-10 14:13:01.664660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.179 Running I/O for 1 seconds... 00:07:38.116 1600.00 IOPS, 100.00 MiB/s 00:07:38.116 Latency(us) 00:07:38.116 [2024-12-10T14:13:02.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.116 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:38.116 Verification LBA range: start 0x0 length 0x400 00:07:38.116 Nvme0n1 : 1.03 1609.96 100.62 0.00 0.00 39004.83 3872.58 34317.03 00:07:38.116 [2024-12-10T14:13:02.953Z] =================================================================================================================== 00:07:38.116 [2024-12-10T14:13:02.953Z] Total : 1609.96 100.62 0.00 0.00 39004.83 3872.58 34317.03 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.116 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:38.375 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.375 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:38.375 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.375 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.375 rmmod nvme_tcp 00:07:38.375 rmmod nvme_fabrics 00:07:38.375 rmmod nvme_keyring 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 63266 ']' 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 63266 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 63266 ']' 00:07:38.375 14:13:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 63266 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63266 00:07:38.375 killing process with pid 63266 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63266' 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 63266 00:07:38.375 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 63266 00:07:38.375 [2024-12-10 14:13:03.187774] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.679 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:38.680 14:13:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:38.680 00:07:38.680 real 0m5.128s 00:07:38.680 user 0m17.974s 00:07:38.680 sys 0m1.359s 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.680 ************************************ 00:07:38.680 END TEST nvmf_host_management 00:07:38.680 ************************************ 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.680 14:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.939 ************************************ 00:07:38.939 START TEST nvmf_lvol 00:07:38.939 ************************************ 00:07:38.939 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:38.939 * Looking for test storage... 
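As a quick cross-check of the two bdevperf tables in this test, the MiB/s column is just IOPS scaled by the 64 KiB I/O size requested with -o 65536: the aborted first pass reported 1422.88 IOPS, about 88.93 MiB/s, and the clean 1.03-second rerun 1609.96 IOPS, about 100.62 MiB/s. A throwaway shell check of that arithmetic (the bc invocation is illustrative only):

# 64 KiB per I/O, so MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
echo "1422.88 * 65536 / 1048576" | bc -l   # ~88.93  (first run, ended in error)
echo "1609.96 * 65536 / 1048576" | bc -l   # ~100.62 (second run, completed)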
00:07:38.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:38.939 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.939 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.940 --rc genhtml_branch_coverage=1 00:07:38.940 --rc genhtml_function_coverage=1 00:07:38.940 --rc genhtml_legend=1 00:07:38.940 --rc geninfo_all_blocks=1 00:07:38.940 --rc geninfo_unexecuted_blocks=1 00:07:38.940 00:07:38.940 ' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.940 --rc genhtml_branch_coverage=1 00:07:38.940 --rc genhtml_function_coverage=1 00:07:38.940 --rc genhtml_legend=1 00:07:38.940 --rc geninfo_all_blocks=1 00:07:38.940 --rc geninfo_unexecuted_blocks=1 00:07:38.940 00:07:38.940 ' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.940 --rc genhtml_branch_coverage=1 00:07:38.940 --rc genhtml_function_coverage=1 00:07:38.940 --rc genhtml_legend=1 00:07:38.940 --rc geninfo_all_blocks=1 00:07:38.940 --rc geninfo_unexecuted_blocks=1 00:07:38.940 00:07:38.940 ' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.940 --rc genhtml_branch_coverage=1 00:07:38.940 --rc genhtml_function_coverage=1 00:07:38.940 --rc genhtml_legend=1 00:07:38.940 --rc geninfo_all_blocks=1 00:07:38.940 --rc geninfo_unexecuted_blocks=1 00:07:38.940 00:07:38.940 ' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.940 14:13:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.940 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:38.940 
14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:38.941 Cannot find device "nvmf_init_br" 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:38.941 Cannot find device "nvmf_init_br2" 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:38.941 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:39.200 Cannot find device "nvmf_tgt_br" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:39.200 Cannot find device "nvmf_tgt_br2" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:39.200 Cannot find device "nvmf_init_br" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:39.200 Cannot find device "nvmf_init_br2" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:39.200 Cannot find device "nvmf_tgt_br" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:39.200 Cannot find device "nvmf_tgt_br2" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:39.200 Cannot find device "nvmf_br" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:39.200 Cannot find device "nvmf_init_if" 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:39.200 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:39.201 Cannot find device "nvmf_init_if2" 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:39.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:39.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:39.201 14:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:39.201 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:39.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:39.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:07:39.460 00:07:39.460 --- 10.0.0.3 ping statistics --- 00:07:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.460 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:39.460 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:39.460 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:07:39.460 00:07:39.460 --- 10.0.0.4 ping statistics --- 00:07:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.460 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:39.460 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:39.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:39.461 00:07:39.461 --- 10.0.0.1 ping statistics --- 00:07:39.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.461 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:39.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:39.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:39.461 00:07:39.461 --- 10.0.0.2 ping statistics --- 00:07:39.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.461 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63618 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63618 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63618 ']' 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.461 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.461 [2024-12-10 14:13:04.232371] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:39.461 [2024-12-10 14:13:04.232492] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.720 [2024-12-10 14:13:04.380794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.720 [2024-12-10 14:13:04.411372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.720 [2024-12-10 14:13:04.411443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.720 [2024-12-10 14:13:04.411469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.720 [2024-12-10 14:13:04.411476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.720 [2024-12-10 14:13:04.411483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.720 [2024-12-10 14:13:04.412338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.720 [2024-12-10 14:13:04.412470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.720 [2024-12-10 14:13:04.412478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.720 [2024-12-10 14:13:04.441097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.720 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:40.288 [2024-12-10 14:13:04.821137] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.288 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.546 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:40.546 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.805 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:40.805 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:41.064 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:41.322 14:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=12cda165-96a1-40f6-a2f4-a75921efd868 00:07:41.322 14:13:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 12cda165-96a1-40f6-a2f4-a75921efd868 lvol 20 00:07:41.581 14:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9c37c89d-740a-4f8c-83dd-bafc8cb41f49 00:07:41.581 14:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.840 14:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c37c89d-740a-4f8c-83dd-bafc8cb41f49 00:07:42.099 14:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:42.357 [2024-12-10 14:13:06.999030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:42.357 14:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:42.616 14:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:42.616 14:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63686 00:07:42.616 14:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:43.552 14:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 9c37c89d-740a-4f8c-83dd-bafc8cb41f49 MY_SNAPSHOT 00:07:43.811 14:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=80dd9e1c-a2ea-411a-a964-158fcaff525f 00:07:43.811 14:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 9c37c89d-740a-4f8c-83dd-bafc8cb41f49 30 00:07:44.379 14:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 80dd9e1c-a2ea-411a-a964-158fcaff525f MY_CLONE 00:07:44.637 14:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=212da903-d999-43bb-8879-3541e57e28c4 00:07:44.638 14:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 212da903-d999-43bb-8879-3541e57e28c4 00:07:45.205 14:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63686 00:07:53.320 Initializing NVMe Controllers 00:07:53.320 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:53.320 Controller IO queue size 128, less than required. 00:07:53.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:53.320 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:53.320 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:53.320 Initialization complete. Launching workers. 
00:07:53.320 ======================================================== 00:07:53.320 Latency(us) 00:07:53.320 Device Information : IOPS MiB/s Average min max 00:07:53.320 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10518.30 41.09 12179.29 891.62 76474.64 00:07:53.320 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10468.30 40.89 12235.62 2444.40 47639.47 00:07:53.320 ======================================================== 00:07:53.320 Total : 20986.60 81.98 12207.39 891.62 76474.64 00:07:53.320 00:07:53.320 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:53.320 14:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9c37c89d-740a-4f8c-83dd-bafc8cb41f49 00:07:53.320 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12cda165-96a1-40f6-a2f4-a75921efd868 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.927 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.927 rmmod nvme_tcp 00:07:53.927 rmmod nvme_fabrics 00:07:53.927 rmmod nvme_keyring 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63618 ']' 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63618 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63618 ']' 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63618 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63618 00:07:53.928 killing process with pid 63618 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63618' 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63618 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63618 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:53.928 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.186 14:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:54.445 00:07:54.445 real 0m15.501s 00:07:54.445 user 1m3.972s 00:07:54.445 sys 0m4.311s 00:07:54.445 ************************************ 00:07:54.445 END TEST nvmf_lvol 00:07:54.445 
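Condensed, the RPC sequence that nvmf_lvol.sh drove in the trace above (a sketch only: rpc_py is the scripts/rpc.py path used throughout this run, and the <...-uuid> placeholders stand for the UUIDs this particular run reported):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport init
  $rpc_py bdev_malloc_create 64 512                                # Malloc0
  $rpc_py bdev_malloc_create 64 512                                # Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc_py bdev_lvol_create_lvstore raid0 lvs                       # prints <lvs-uuid>
  $rpc_py bdev_lvol_create -u <lvs-uuid> lvol 20                   # 20 MiB lvol, prints <lvol-uuid>
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # while spdk_nvme_perf writes to the exported lvol over TCP:
  $rpc_py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT               # prints <snapshot-uuid>
  $rpc_py bdev_lvol_resize <lvol-uuid> 30                          # grow lvol to 30 MiB
  $rpc_py bdev_lvol_clone <snapshot-uuid> MY_CLONE                 # prints <clone-uuid>
  $rpc_py bdev_lvol_inflate <clone-uuid>                           # detach clone from its snapshot
  # teardown after perf finishes:
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc_py bdev_lvol_delete <lvol-uuid>
  $rpc_py bdev_lvol_delete_lvstore -u <lvs-uuid>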
************************************ 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.445 ************************************ 00:07:54.445 START TEST nvmf_lvs_grow 00:07:54.445 ************************************ 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:54.445 * Looking for test storage... 00:07:54.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.445 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.446 --rc genhtml_branch_coverage=1 00:07:54.446 --rc genhtml_function_coverage=1 00:07:54.446 --rc genhtml_legend=1 00:07:54.446 --rc geninfo_all_blocks=1 00:07:54.446 --rc geninfo_unexecuted_blocks=1 00:07:54.446 00:07:54.446 ' 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.446 --rc genhtml_branch_coverage=1 00:07:54.446 --rc genhtml_function_coverage=1 00:07:54.446 --rc genhtml_legend=1 00:07:54.446 --rc geninfo_all_blocks=1 00:07:54.446 --rc geninfo_unexecuted_blocks=1 00:07:54.446 00:07:54.446 ' 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.446 --rc genhtml_branch_coverage=1 00:07:54.446 --rc genhtml_function_coverage=1 00:07:54.446 --rc genhtml_legend=1 00:07:54.446 --rc geninfo_all_blocks=1 00:07:54.446 --rc geninfo_unexecuted_blocks=1 00:07:54.446 00:07:54.446 ' 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.446 --rc genhtml_branch_coverage=1 00:07:54.446 --rc genhtml_function_coverage=1 00:07:54.446 --rc genhtml_legend=1 00:07:54.446 --rc geninfo_all_blocks=1 00:07:54.446 --rc geninfo_unexecuted_blocks=1 00:07:54.446 00:07:54.446 ' 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:54.446 14:13:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.446 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.705 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:54.706 Cannot find device "nvmf_init_br" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:54.706 Cannot find device "nvmf_init_br2" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:54.706 Cannot find device "nvmf_tgt_br" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.706 Cannot find device "nvmf_tgt_br2" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:54.706 Cannot find device "nvmf_init_br" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:54.706 Cannot find device "nvmf_init_br2" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:54.706 Cannot find device "nvmf_tgt_br" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:54.706 Cannot find device "nvmf_tgt_br2" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:54.706 Cannot find device "nvmf_br" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:54.706 Cannot find device "nvmf_init_if" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:54.706 Cannot find device "nvmf_init_if2" 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:54.706 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.965 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:54.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:07:54.966 00:07:54.966 --- 10.0.0.3 ping statistics --- 00:07:54.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.966 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:54.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:54.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:07:54.966 00:07:54.966 --- 10.0.0.4 ping statistics --- 00:07:54.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.966 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:54.966 00:07:54.966 --- 10.0.0.1 ping statistics --- 00:07:54.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.966 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:54.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:07:54.966 00:07:54.966 --- 10.0.0.2 ping statistics --- 00:07:54.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.966 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=64069 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 64069 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 64069 ']' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.966 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 [2024-12-10 14:13:19.750709] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:07:54.966 [2024-12-10 14:13:19.751029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.225 [2024-12-10 14:13:19.906237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.225 [2024-12-10 14:13:19.944229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.225 [2024-12-10 14:13:19.944533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.225 [2024-12-10 14:13:19.944705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.225 [2024-12-10 14:13:19.944861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.225 [2024-12-10 14:13:19.944914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.225 [2024-12-10 14:13:19.945399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.225 [2024-12-10 14:13:19.978677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.225 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.225 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:55.225 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.225 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.225 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.484 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.484 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.743 [2024-12-10 14:13:20.397368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.743 ************************************ 00:07:55.743 START TEST lvs_grow_clean 00:07:55.743 ************************************ 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.743 14:13:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.743 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.002 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:56.002 14:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:56.261 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9831a038-6a0c-4c49-8332-367b1e94aec9 00:07:56.261 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:07:56.261 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:56.519 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:56.519 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:56.519 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9831a038-6a0c-4c49-8332-367b1e94aec9 lvol 150 00:07:56.778 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2026e9fc-bb89-4577-bc19-4db671399741 00:07:56.778 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.778 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:57.037 [2024-12-10 14:13:21.812924] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:57.037 [2024-12-10 14:13:21.813060] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:57.037 true 00:07:57.037 14:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:07:57.037 14:13:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:57.296 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:57.296 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.863 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2026e9fc-bb89-4577-bc19-4db671399741 00:07:57.863 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:58.122 [2024-12-10 14:13:22.917542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:58.122 14:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64146 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64146 /var/tmp/bdevperf.sock 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 64146 ']' 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.380 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.381 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.381 14:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:58.639 [2024-12-10 14:13:23.218882] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
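For reference, the clean-variant setup logged above reduces to a short RPC sequence: carve a 200M file into an AIO bdev, build a 4 MiB-cluster lvstore and a 150M lvol on it, then export the lvol over NVMe/TCP on 10.0.0.3:4420. A rough by-hand sketch using the same rpc.py calls and paths this run records ($rpc is only shorthand for the full scripts/rpc.py path, and the UUIDs are captured from the create calls rather than hard-coding this run's 9831a038.../2026e9fc... values):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                       # 150M lvol, 38 clusters in this run
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420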
00:07:58.639 [2024-12-10 14:13:23.219003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64146 ] 00:07:58.639 [2024-12-10 14:13:23.369825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.639 [2024-12-10 14:13:23.409701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.639 [2024-12-10 14:13:23.446020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.574 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.574 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:59.574 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:59.832 Nvme0n1 00:07:59.832 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:00.091 [ 00:08:00.091 { 00:08:00.091 "name": "Nvme0n1", 00:08:00.091 "aliases": [ 00:08:00.091 "2026e9fc-bb89-4577-bc19-4db671399741" 00:08:00.091 ], 00:08:00.091 "product_name": "NVMe disk", 00:08:00.091 "block_size": 4096, 00:08:00.091 "num_blocks": 38912, 00:08:00.091 "uuid": "2026e9fc-bb89-4577-bc19-4db671399741", 00:08:00.091 "numa_id": -1, 00:08:00.091 "assigned_rate_limits": { 00:08:00.091 "rw_ios_per_sec": 0, 00:08:00.091 "rw_mbytes_per_sec": 0, 00:08:00.091 "r_mbytes_per_sec": 0, 00:08:00.091 "w_mbytes_per_sec": 0 00:08:00.091 }, 00:08:00.091 "claimed": false, 00:08:00.091 "zoned": false, 00:08:00.091 "supported_io_types": { 00:08:00.091 "read": true, 00:08:00.091 "write": true, 00:08:00.091 "unmap": true, 00:08:00.091 "flush": true, 00:08:00.091 "reset": true, 00:08:00.091 "nvme_admin": true, 00:08:00.091 "nvme_io": true, 00:08:00.091 "nvme_io_md": false, 00:08:00.091 "write_zeroes": true, 00:08:00.091 "zcopy": false, 00:08:00.091 "get_zone_info": false, 00:08:00.091 "zone_management": false, 00:08:00.091 "zone_append": false, 00:08:00.091 "compare": true, 00:08:00.091 "compare_and_write": true, 00:08:00.091 "abort": true, 00:08:00.091 "seek_hole": false, 00:08:00.091 "seek_data": false, 00:08:00.091 "copy": true, 00:08:00.091 "nvme_iov_md": false 00:08:00.091 }, 00:08:00.091 "memory_domains": [ 00:08:00.091 { 00:08:00.091 "dma_device_id": "system", 00:08:00.091 "dma_device_type": 1 00:08:00.091 } 00:08:00.091 ], 00:08:00.091 "driver_specific": { 00:08:00.091 "nvme": [ 00:08:00.091 { 00:08:00.091 "trid": { 00:08:00.091 "trtype": "TCP", 00:08:00.091 "adrfam": "IPv4", 00:08:00.091 "traddr": "10.0.0.3", 00:08:00.091 "trsvcid": "4420", 00:08:00.091 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:00.091 }, 00:08:00.091 "ctrlr_data": { 00:08:00.091 "cntlid": 1, 00:08:00.091 "vendor_id": "0x8086", 00:08:00.091 "model_number": "SPDK bdev Controller", 00:08:00.091 "serial_number": "SPDK0", 00:08:00.091 "firmware_revision": "25.01", 00:08:00.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.091 "oacs": { 00:08:00.091 "security": 0, 00:08:00.091 "format": 0, 00:08:00.091 "firmware": 0, 
00:08:00.091 "ns_manage": 0 00:08:00.091 }, 00:08:00.091 "multi_ctrlr": true, 00:08:00.091 "ana_reporting": false 00:08:00.091 }, 00:08:00.091 "vs": { 00:08:00.091 "nvme_version": "1.3" 00:08:00.091 }, 00:08:00.091 "ns_data": { 00:08:00.091 "id": 1, 00:08:00.091 "can_share": true 00:08:00.091 } 00:08:00.091 } 00:08:00.091 ], 00:08:00.091 "mp_policy": "active_passive" 00:08:00.091 } 00:08:00.091 } 00:08:00.091 ] 00:08:00.091 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.091 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64169 00:08:00.091 14:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:00.091 Running I/O for 10 seconds... 00:08:01.472 Latency(us) 00:08:01.472 [2024-12-10T14:13:26.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.472 Nvme0n1 : 1.00 6509.00 25.43 0.00 0.00 0.00 0.00 0.00 00:08:01.472 [2024-12-10T14:13:26.309Z] =================================================================================================================== 00:08:01.472 [2024-12-10T14:13:26.309Z] Total : 6509.00 25.43 0.00 0.00 0.00 0.00 0.00 00:08:01.472 00:08:02.040 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:02.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.298 Nvme0n1 : 2.00 6429.50 25.12 0.00 0.00 0.00 0.00 0.00 00:08:02.298 [2024-12-10T14:13:27.135Z] =================================================================================================================== 00:08:02.298 [2024-12-10T14:13:27.135Z] Total : 6429.50 25.12 0.00 0.00 0.00 0.00 0.00 00:08:02.298 00:08:02.298 true 00:08:02.298 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:02.298 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:02.865 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:02.865 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:02.865 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 64169 00:08:03.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.133 Nvme0n1 : 3.00 6530.00 25.51 0.00 0.00 0.00 0.00 0.00 00:08:03.133 [2024-12-10T14:13:27.970Z] =================================================================================================================== 00:08:03.133 [2024-12-10T14:13:27.970Z] Total : 6530.00 25.51 0.00 0.00 0.00 0.00 0.00 00:08:03.133 00:08:04.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.089 Nvme0n1 : 4.00 6516.75 25.46 0.00 0.00 0.00 0.00 0.00 00:08:04.089 [2024-12-10T14:13:28.926Z] 
=================================================================================================================== 00:08:04.089 [2024-12-10T14:13:28.926Z] Total : 6516.75 25.46 0.00 0.00 0.00 0.00 0.00 00:08:04.089 00:08:05.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.465 Nvme0n1 : 5.00 6534.20 25.52 0.00 0.00 0.00 0.00 0.00 00:08:05.465 [2024-12-10T14:13:30.302Z] =================================================================================================================== 00:08:05.465 [2024-12-10T14:13:30.302Z] Total : 6534.20 25.52 0.00 0.00 0.00 0.00 0.00 00:08:05.465 00:08:06.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.401 Nvme0n1 : 6.00 6452.33 25.20 0.00 0.00 0.00 0.00 0.00 00:08:06.401 [2024-12-10T14:13:31.238Z] =================================================================================================================== 00:08:06.401 [2024-12-10T14:13:31.238Z] Total : 6452.33 25.20 0.00 0.00 0.00 0.00 0.00 00:08:06.401 00:08:07.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.338 Nvme0n1 : 7.00 6437.71 25.15 0.00 0.00 0.00 0.00 0.00 00:08:07.338 [2024-12-10T14:13:32.175Z] =================================================================================================================== 00:08:07.338 [2024-12-10T14:13:32.175Z] Total : 6437.71 25.15 0.00 0.00 0.00 0.00 0.00 00:08:07.338 00:08:08.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.275 Nvme0n1 : 8.00 6410.88 25.04 0.00 0.00 0.00 0.00 0.00 00:08:08.275 [2024-12-10T14:13:33.112Z] =================================================================================================================== 00:08:08.275 [2024-12-10T14:13:33.112Z] Total : 6410.88 25.04 0.00 0.00 0.00 0.00 0.00 00:08:08.275 00:08:09.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.212 Nvme0n1 : 9.00 6404.11 25.02 0.00 0.00 0.00 0.00 0.00 00:08:09.212 [2024-12-10T14:13:34.049Z] =================================================================================================================== 00:08:09.212 [2024-12-10T14:13:34.049Z] Total : 6404.11 25.02 0.00 0.00 0.00 0.00 0.00 00:08:09.212 00:08:10.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.150 Nvme0n1 : 10.00 6386.00 24.95 0.00 0.00 0.00 0.00 0.00 00:08:10.150 [2024-12-10T14:13:34.987Z] =================================================================================================================== 00:08:10.150 [2024-12-10T14:13:34.987Z] Total : 6386.00 24.95 0.00 0.00 0.00 0.00 0.00 00:08:10.150 00:08:10.150 00:08:10.150 Latency(us) 00:08:10.150 [2024-12-10T14:13:34.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.150 Nvme0n1 : 10.00 6396.73 24.99 0.00 0.00 20004.51 8519.68 114866.73 00:08:10.150 [2024-12-10T14:13:34.987Z] =================================================================================================================== 00:08:10.150 [2024-12-10T14:13:34.987Z] Total : 6396.73 24.99 0.00 0.00 20004.51 8519.68 114866.73 00:08:10.150 { 00:08:10.150 "results": [ 00:08:10.150 { 00:08:10.150 "job": "Nvme0n1", 00:08:10.150 "core_mask": "0x2", 00:08:10.150 "workload": "randwrite", 00:08:10.150 "status": "finished", 00:08:10.150 "queue_depth": 128, 00:08:10.150 "io_size": 4096, 00:08:10.150 "runtime": 
10.003241, 00:08:10.150 "iops": 6396.726820837366, 00:08:10.150 "mibps": 24.98721414389596, 00:08:10.151 "io_failed": 0, 00:08:10.151 "io_timeout": 0, 00:08:10.151 "avg_latency_us": 20004.505346570662, 00:08:10.151 "min_latency_us": 8519.68, 00:08:10.151 "max_latency_us": 114866.73454545454 00:08:10.151 } 00:08:10.151 ], 00:08:10.151 "core_count": 1 00:08:10.151 } 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64146 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 64146 ']' 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 64146 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64146 00:08:10.151 killing process with pid 64146 00:08:10.151 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.151 00:08:10.151 Latency(us) 00:08:10.151 [2024-12-10T14:13:34.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.151 [2024-12-10T14:13:34.988Z] =================================================================================================================== 00:08:10.151 [2024-12-10T14:13:34.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64146' 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 64146 00:08:10.151 14:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 64146 00:08:10.410 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:10.669 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.927 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:10.927 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:11.186 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:11.186 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:11.186 14:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.445 [2024-12-10 14:13:36.157582] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:11.445 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:11.704 request: 00:08:11.704 { 00:08:11.704 "uuid": "9831a038-6a0c-4c49-8332-367b1e94aec9", 00:08:11.704 "method": "bdev_lvol_get_lvstores", 00:08:11.704 "req_id": 1 00:08:11.704 } 00:08:11.704 Got JSON-RPC error response 00:08:11.704 response: 00:08:11.704 { 00:08:11.704 "code": -19, 00:08:11.704 "message": "No such device" 00:08:11.704 } 00:08:11.704 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:11.704 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.704 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.704 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.704 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.963 aio_bdev 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
2026e9fc-bb89-4577-bc19-4db671399741 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2026e9fc-bb89-4577-bc19-4db671399741 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.963 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.222 14:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2026e9fc-bb89-4577-bc19-4db671399741 -t 2000 00:08:12.481 [ 00:08:12.481 { 00:08:12.481 "name": "2026e9fc-bb89-4577-bc19-4db671399741", 00:08:12.481 "aliases": [ 00:08:12.481 "lvs/lvol" 00:08:12.481 ], 00:08:12.481 "product_name": "Logical Volume", 00:08:12.481 "block_size": 4096, 00:08:12.481 "num_blocks": 38912, 00:08:12.481 "uuid": "2026e9fc-bb89-4577-bc19-4db671399741", 00:08:12.481 "assigned_rate_limits": { 00:08:12.481 "rw_ios_per_sec": 0, 00:08:12.481 "rw_mbytes_per_sec": 0, 00:08:12.481 "r_mbytes_per_sec": 0, 00:08:12.481 "w_mbytes_per_sec": 0 00:08:12.481 }, 00:08:12.481 "claimed": false, 00:08:12.481 "zoned": false, 00:08:12.481 "supported_io_types": { 00:08:12.481 "read": true, 00:08:12.481 "write": true, 00:08:12.481 "unmap": true, 00:08:12.481 "flush": false, 00:08:12.481 "reset": true, 00:08:12.481 "nvme_admin": false, 00:08:12.481 "nvme_io": false, 00:08:12.481 "nvme_io_md": false, 00:08:12.481 "write_zeroes": true, 00:08:12.481 "zcopy": false, 00:08:12.481 "get_zone_info": false, 00:08:12.481 "zone_management": false, 00:08:12.481 "zone_append": false, 00:08:12.481 "compare": false, 00:08:12.481 "compare_and_write": false, 00:08:12.481 "abort": false, 00:08:12.481 "seek_hole": true, 00:08:12.481 "seek_data": true, 00:08:12.481 "copy": false, 00:08:12.481 "nvme_iov_md": false 00:08:12.481 }, 00:08:12.481 "driver_specific": { 00:08:12.481 "lvol": { 00:08:12.481 "lvol_store_uuid": "9831a038-6a0c-4c49-8332-367b1e94aec9", 00:08:12.481 "base_bdev": "aio_bdev", 00:08:12.481 "thin_provision": false, 00:08:12.481 "num_allocated_clusters": 38, 00:08:12.481 "snapshot": false, 00:08:12.481 "clone": false, 00:08:12.481 "esnap_clone": false 00:08:12.481 } 00:08:12.481 } 00:08:12.481 } 00:08:12.481 ] 00:08:12.481 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:12.481 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:12.481 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.740 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.740 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:12.740 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:13.308 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:13.308 14:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2026e9fc-bb89-4577-bc19-4db671399741 00:08:13.308 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9831a038-6a0c-4c49-8332-367b1e94aec9 00:08:13.576 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.880 14:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.458 ************************************ 00:08:14.458 END TEST lvs_grow_clean 00:08:14.458 ************************************ 00:08:14.458 00:08:14.458 real 0m18.572s 00:08:14.458 user 0m17.713s 00:08:14.458 sys 0m2.405s 00:08:14.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:14.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:14.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.458 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.459 ************************************ 00:08:14.459 START TEST lvs_grow_dirty 00:08:14.459 ************************************ 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.459 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.717 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:14.717 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.976 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:14.976 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:14.976 14:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:15.235 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:15.235 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:15.235 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e63d3c52-5941-400a-861e-8c4cd50237e0 lvol 150 00:08:15.495 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d0e93e75-4618-477d-9141-53d349eb634f 00:08:15.495 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.495 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.754 [2024-12-10 14:13:40.539637] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.754 [2024-12-10 14:13:40.539728] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.754 true 00:08:15.754 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:15.754 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:16.013 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:16.013 14:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:16.272 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d0e93e75-4618-477d-9141-53d349eb634f 00:08:16.531 14:13:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:16.790 [2024-12-10 14:13:41.556158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:16.790 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64425 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64425 /var/tmp/bdevperf.sock 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64425 ']' 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.050 14:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:17.050 [2024-12-10 14:13:41.855493] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
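The dirty variant repeats the same setup, and the grow step itself is identical in both variants. The backing file has already been doubled and the AIO bdev rescanned (51200 -> 102400 blocks, logged above); once the bdevperf job below is pushing I/O at the exported lvol, the harness grows the lvstore in place, after which total_data_clusters must read 99 instead of 49 (4 MiB clusters, with the one missing cluster holding lvstore metadata). A rough sketch of just that step, with $rpc shorthand for scripts/rpc.py and $lvs the lvstore UUID from the create call (e63d3c52... in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev     # grow the backing file 200M -> 400M
  $rpc bdev_aio_rescan aio_bdev                                               # AIO bdev picks up the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                                       # lvstore claims the added clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # 49 before the grow, 99 after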
00:08:17.050 [2024-12-10 14:13:41.855588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64425 ] 00:08:17.309 [2024-12-10 14:13:42.003329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.309 [2024-12-10 14:13:42.043898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.309 [2024-12-10 14:13:42.077720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.309 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.309 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:17.309 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.878 Nvme0n1 00:08:17.878 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:18.136 [ 00:08:18.136 { 00:08:18.136 "name": "Nvme0n1", 00:08:18.136 "aliases": [ 00:08:18.137 "d0e93e75-4618-477d-9141-53d349eb634f" 00:08:18.137 ], 00:08:18.137 "product_name": "NVMe disk", 00:08:18.137 "block_size": 4096, 00:08:18.137 "num_blocks": 38912, 00:08:18.137 "uuid": "d0e93e75-4618-477d-9141-53d349eb634f", 00:08:18.137 "numa_id": -1, 00:08:18.137 "assigned_rate_limits": { 00:08:18.137 "rw_ios_per_sec": 0, 00:08:18.137 "rw_mbytes_per_sec": 0, 00:08:18.137 "r_mbytes_per_sec": 0, 00:08:18.137 "w_mbytes_per_sec": 0 00:08:18.137 }, 00:08:18.137 "claimed": false, 00:08:18.137 "zoned": false, 00:08:18.137 "supported_io_types": { 00:08:18.137 "read": true, 00:08:18.137 "write": true, 00:08:18.137 "unmap": true, 00:08:18.137 "flush": true, 00:08:18.137 "reset": true, 00:08:18.137 "nvme_admin": true, 00:08:18.137 "nvme_io": true, 00:08:18.137 "nvme_io_md": false, 00:08:18.137 "write_zeroes": true, 00:08:18.137 "zcopy": false, 00:08:18.137 "get_zone_info": false, 00:08:18.137 "zone_management": false, 00:08:18.137 "zone_append": false, 00:08:18.137 "compare": true, 00:08:18.137 "compare_and_write": true, 00:08:18.137 "abort": true, 00:08:18.137 "seek_hole": false, 00:08:18.137 "seek_data": false, 00:08:18.137 "copy": true, 00:08:18.137 "nvme_iov_md": false 00:08:18.137 }, 00:08:18.137 "memory_domains": [ 00:08:18.137 { 00:08:18.137 "dma_device_id": "system", 00:08:18.137 "dma_device_type": 1 00:08:18.137 } 00:08:18.137 ], 00:08:18.137 "driver_specific": { 00:08:18.137 "nvme": [ 00:08:18.137 { 00:08:18.137 "trid": { 00:08:18.137 "trtype": "TCP", 00:08:18.137 "adrfam": "IPv4", 00:08:18.137 "traddr": "10.0.0.3", 00:08:18.137 "trsvcid": "4420", 00:08:18.137 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:18.137 }, 00:08:18.137 "ctrlr_data": { 00:08:18.137 "cntlid": 1, 00:08:18.137 "vendor_id": "0x8086", 00:08:18.137 "model_number": "SPDK bdev Controller", 00:08:18.137 "serial_number": "SPDK0", 00:08:18.137 "firmware_revision": "25.01", 00:08:18.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.137 "oacs": { 00:08:18.137 "security": 0, 00:08:18.137 "format": 0, 00:08:18.137 "firmware": 0, 
00:08:18.137 "ns_manage": 0 00:08:18.137 }, 00:08:18.137 "multi_ctrlr": true, 00:08:18.137 "ana_reporting": false 00:08:18.137 }, 00:08:18.137 "vs": { 00:08:18.137 "nvme_version": "1.3" 00:08:18.137 }, 00:08:18.137 "ns_data": { 00:08:18.137 "id": 1, 00:08:18.137 "can_share": true 00:08:18.137 } 00:08:18.137 } 00:08:18.137 ], 00:08:18.137 "mp_policy": "active_passive" 00:08:18.137 } 00:08:18.137 } 00:08:18.137 ] 00:08:18.137 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64441 00:08:18.137 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.137 14:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.137 Running I/O for 10 seconds... 00:08:19.074 Latency(us) 00:08:19.074 [2024-12-10T14:13:43.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.074 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:19.074 [2024-12-10T14:13:43.911Z] =================================================================================================================== 00:08:19.074 [2024-12-10T14:13:43.911Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:19.074 00:08:20.018 14:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:20.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.277 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:20.277 [2024-12-10T14:13:45.114Z] =================================================================================================================== 00:08:20.277 [2024-12-10T14:13:45.114Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:20.277 00:08:20.277 true 00:08:20.535 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:20.535 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:20.794 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.794 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.794 14:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 64441 00:08:21.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.053 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:21.053 [2024-12-10T14:13:45.890Z] =================================================================================================================== 00:08:21.053 [2024-12-10T14:13:45.890Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:21.053 00:08:22.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.430 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:08:22.430 [2024-12-10T14:13:47.267Z] 
=================================================================================================================== 00:08:22.430 [2024-12-10T14:13:47.267Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:08:22.430 00:08:23.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.367 Nvme0n1 : 5.00 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:08:23.367 [2024-12-10T14:13:48.204Z] =================================================================================================================== 00:08:23.367 [2024-12-10T14:13:48.204Z] Total : 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:08:23.367 00:08:24.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.304 Nvme0n1 : 6.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:24.304 [2024-12-10T14:13:49.141Z] =================================================================================================================== 00:08:24.304 [2024-12-10T14:13:49.141Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:24.304 00:08:25.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.245 Nvme0n1 : 7.00 6658.43 26.01 0.00 0.00 0.00 0.00 0.00 00:08:25.245 [2024-12-10T14:13:50.082Z] =================================================================================================================== 00:08:25.245 [2024-12-10T14:13:50.082Z] Total : 6658.43 26.01 0.00 0.00 0.00 0.00 0.00 00:08:25.245 00:08:26.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.181 Nvme0n1 : 8.00 6478.12 25.31 0.00 0.00 0.00 0.00 0.00 00:08:26.181 [2024-12-10T14:13:51.018Z] =================================================================================================================== 00:08:26.181 [2024-12-10T14:13:51.018Z] Total : 6478.12 25.31 0.00 0.00 0.00 0.00 0.00 00:08:26.181 00:08:27.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.117 Nvme0n1 : 9.00 6449.78 25.19 0.00 0.00 0.00 0.00 0.00 00:08:27.118 [2024-12-10T14:13:51.955Z] =================================================================================================================== 00:08:27.118 [2024-12-10T14:13:51.955Z] Total : 6449.78 25.19 0.00 0.00 0.00 0.00 0.00 00:08:27.118 00:08:28.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.053 Nvme0n1 : 10.00 6427.10 25.11 0.00 0.00 0.00 0.00 0.00 00:08:28.053 [2024-12-10T14:13:52.890Z] =================================================================================================================== 00:08:28.053 [2024-12-10T14:13:52.890Z] Total : 6427.10 25.11 0.00 0.00 0.00 0.00 0.00 00:08:28.053 00:08:28.053 00:08:28.053 Latency(us) 00:08:28.053 [2024-12-10T14:13:52.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.053 Nvme0n1 : 10.01 6435.08 25.14 0.00 0.00 19886.66 7566.43 218294.46 00:08:28.053 [2024-12-10T14:13:52.890Z] =================================================================================================================== 00:08:28.053 [2024-12-10T14:13:52.890Z] Total : 6435.08 25.14 0.00 0.00 19886.66 7566.43 218294.46 00:08:28.053 { 00:08:28.053 "results": [ 00:08:28.053 { 00:08:28.053 "job": "Nvme0n1", 00:08:28.053 "core_mask": "0x2", 00:08:28.053 "workload": "randwrite", 00:08:28.053 "status": "finished", 00:08:28.053 "queue_depth": 128, 00:08:28.053 "io_size": 4096, 00:08:28.053 "runtime": 
10.007492, 00:08:28.053 "iops": 6435.07883893387, 00:08:28.053 "mibps": 25.13702671458543, 00:08:28.053 "io_failed": 0, 00:08:28.053 "io_timeout": 0, 00:08:28.053 "avg_latency_us": 19886.658704864138, 00:08:28.053 "min_latency_us": 7566.4290909090905, 00:08:28.053 "max_latency_us": 218294.4581818182 00:08:28.053 } 00:08:28.053 ], 00:08:28.053 "core_count": 1 00:08:28.053 } 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64425 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 64425 ']' 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 64425 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64425 00:08:28.312 killing process with pid 64425 00:08:28.312 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.312 00:08:28.312 Latency(us) 00:08:28.312 [2024-12-10T14:13:53.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.312 [2024-12-10T14:13:53.149Z] =================================================================================================================== 00:08:28.312 [2024-12-10T14:13:53.149Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64425' 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 64425 00:08:28.312 14:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 64425 00:08:28.312 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:28.570 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.829 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:28.829 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64069 
00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64069 00:08:29.397 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64069 Killed "${NVMF_APP[@]}" "$@" 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64574 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64574 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64574 ']' 00:08:29.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.397 14:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.397 [2024-12-10 14:13:54.033109] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:08:29.397 [2024-12-10 14:13:54.033573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.397 [2024-12-10 14:13:54.181526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.397 [2024-12-10 14:13:54.213701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.397 [2024-12-10 14:13:54.213903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.397 [2024-12-10 14:13:54.214102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.397 [2024-12-10 14:13:54.214116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.397 [2024-12-10 14:13:54.214124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
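This second nvmf_tgt (pid 64574) is the heart of the dirty variant: the first target was killed with kill -9 while the grown lvstore was still loaded, so nothing was cleanly unloaded, and re-registering the same backing file forces blobstore recovery (the "Performing recovery on blobstore" / "Recover: blob" notices just below). The pass criteria are simply that the recovered pool still shows the grown geometry. A rough sketch of the reload-and-check step, with $rpc shorthand for scripts/rpc.py and $lvs the lvstore UUID (e63d3c52... here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # reload; blobstore replays its metadata
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # expect 61 (99 total minus 38 allocated to the lvol)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # expect 99, i.e. the grow survived the crash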
00:08:29.397 [2024-12-10 14:13:54.214475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.657 [2024-12-10 14:13:54.244802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.223 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.223 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:30.223 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.223 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.223 14:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.223 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.223 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.482 [2024-12-10 14:13:55.278162] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:30.482 [2024-12-10 14:13:55.278588] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:30.482 [2024-12-10 14:13:55.279034] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d0e93e75-4618-477d-9141-53d349eb634f 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d0e93e75-4618-477d-9141-53d349eb634f 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.740 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0e93e75-4618-477d-9141-53d349eb634f -t 2000 00:08:30.999 [ 00:08:30.999 { 00:08:30.999 "name": "d0e93e75-4618-477d-9141-53d349eb634f", 00:08:30.999 "aliases": [ 00:08:30.999 "lvs/lvol" 00:08:30.999 ], 00:08:30.999 "product_name": "Logical Volume", 00:08:30.999 "block_size": 4096, 00:08:30.999 "num_blocks": 38912, 00:08:30.999 "uuid": "d0e93e75-4618-477d-9141-53d349eb634f", 00:08:30.999 "assigned_rate_limits": { 00:08:30.999 "rw_ios_per_sec": 0, 00:08:30.999 "rw_mbytes_per_sec": 0, 00:08:30.999 "r_mbytes_per_sec": 0, 00:08:30.999 "w_mbytes_per_sec": 0 00:08:30.999 }, 00:08:30.999 
"claimed": false, 00:08:30.999 "zoned": false, 00:08:30.999 "supported_io_types": { 00:08:30.999 "read": true, 00:08:30.999 "write": true, 00:08:30.999 "unmap": true, 00:08:30.999 "flush": false, 00:08:30.999 "reset": true, 00:08:30.999 "nvme_admin": false, 00:08:30.999 "nvme_io": false, 00:08:30.999 "nvme_io_md": false, 00:08:30.999 "write_zeroes": true, 00:08:30.999 "zcopy": false, 00:08:30.999 "get_zone_info": false, 00:08:30.999 "zone_management": false, 00:08:30.999 "zone_append": false, 00:08:30.999 "compare": false, 00:08:30.999 "compare_and_write": false, 00:08:30.999 "abort": false, 00:08:30.999 "seek_hole": true, 00:08:30.999 "seek_data": true, 00:08:30.999 "copy": false, 00:08:30.999 "nvme_iov_md": false 00:08:30.999 }, 00:08:30.999 "driver_specific": { 00:08:30.999 "lvol": { 00:08:30.999 "lvol_store_uuid": "e63d3c52-5941-400a-861e-8c4cd50237e0", 00:08:30.999 "base_bdev": "aio_bdev", 00:08:30.999 "thin_provision": false, 00:08:30.999 "num_allocated_clusters": 38, 00:08:30.999 "snapshot": false, 00:08:31.000 "clone": false, 00:08:31.000 "esnap_clone": false 00:08:31.000 } 00:08:31.000 } 00:08:31.000 } 00:08:31.000 ] 00:08:31.000 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:31.000 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:31.000 14:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:31.259 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:31.259 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:31.259 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:31.827 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:31.827 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:31.827 [2024-12-10 14:13:56.640110] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.087 14:13:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:32.087 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:32.087 request: 00:08:32.087 { 00:08:32.087 "uuid": "e63d3c52-5941-400a-861e-8c4cd50237e0", 00:08:32.087 "method": "bdev_lvol_get_lvstores", 00:08:32.087 "req_id": 1 00:08:32.087 } 00:08:32.087 Got JSON-RPC error response 00:08:32.087 response: 00:08:32.087 { 00:08:32.087 "code": -19, 00:08:32.087 "message": "No such device" 00:08:32.087 } 00:08:32.347 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:32.347 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.347 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.347 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.347 14:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.347 aio_bdev 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d0e93e75-4618-477d-9141-53d349eb634f 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d0e93e75-4618-477d-9141-53d349eb634f 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.347 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.606 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0e93e75-4618-477d-9141-53d349eb634f -t 2000 00:08:32.864 [ 00:08:32.864 { 
00:08:32.864 "name": "d0e93e75-4618-477d-9141-53d349eb634f", 00:08:32.864 "aliases": [ 00:08:32.864 "lvs/lvol" 00:08:32.864 ], 00:08:32.864 "product_name": "Logical Volume", 00:08:32.864 "block_size": 4096, 00:08:32.864 "num_blocks": 38912, 00:08:32.864 "uuid": "d0e93e75-4618-477d-9141-53d349eb634f", 00:08:32.864 "assigned_rate_limits": { 00:08:32.864 "rw_ios_per_sec": 0, 00:08:32.864 "rw_mbytes_per_sec": 0, 00:08:32.864 "r_mbytes_per_sec": 0, 00:08:32.864 "w_mbytes_per_sec": 0 00:08:32.864 }, 00:08:32.864 "claimed": false, 00:08:32.864 "zoned": false, 00:08:32.864 "supported_io_types": { 00:08:32.864 "read": true, 00:08:32.864 "write": true, 00:08:32.864 "unmap": true, 00:08:32.864 "flush": false, 00:08:32.864 "reset": true, 00:08:32.864 "nvme_admin": false, 00:08:32.864 "nvme_io": false, 00:08:32.864 "nvme_io_md": false, 00:08:32.864 "write_zeroes": true, 00:08:32.864 "zcopy": false, 00:08:32.864 "get_zone_info": false, 00:08:32.864 "zone_management": false, 00:08:32.864 "zone_append": false, 00:08:32.864 "compare": false, 00:08:32.864 "compare_and_write": false, 00:08:32.864 "abort": false, 00:08:32.864 "seek_hole": true, 00:08:32.864 "seek_data": true, 00:08:32.864 "copy": false, 00:08:32.864 "nvme_iov_md": false 00:08:32.864 }, 00:08:32.864 "driver_specific": { 00:08:32.864 "lvol": { 00:08:32.864 "lvol_store_uuid": "e63d3c52-5941-400a-861e-8c4cd50237e0", 00:08:32.864 "base_bdev": "aio_bdev", 00:08:32.864 "thin_provision": false, 00:08:32.864 "num_allocated_clusters": 38, 00:08:32.864 "snapshot": false, 00:08:32.864 "clone": false, 00:08:32.864 "esnap_clone": false 00:08:32.864 } 00:08:32.864 } 00:08:32.864 } 00:08:32.864 ] 00:08:32.864 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:32.864 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:32.864 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:33.123 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:33.123 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:33.123 14:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:33.381 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:33.381 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d0e93e75-4618-477d-9141-53d349eb634f 00:08:33.640 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e63d3c52-5941-400a-861e-8c4cd50237e0 00:08:33.899 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.157 14:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:34.725 ************************************ 00:08:34.725 END TEST lvs_grow_dirty 00:08:34.725 ************************************ 00:08:34.725 00:08:34.725 real 0m20.271s 00:08:34.725 user 0m40.350s 00:08:34.725 sys 0m9.508s 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:34.725 nvmf_trace.0 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.725 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.016 rmmod nvme_tcp 00:08:35.016 rmmod nvme_fabrics 00:08:35.016 rmmod nvme_keyring 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64574 ']' 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64574 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64574 ']' 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64574 00:08:35.016 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:35.278 14:13:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64574 00:08:35.278 killing process with pid 64574 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64574' 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64574 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64574 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:35.278 14:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:35.278 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:35.279 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:35.537 ************************************ 00:08:35.537 END TEST nvmf_lvs_grow 00:08:35.537 ************************************ 00:08:35.537 00:08:35.537 real 0m41.158s 00:08:35.537 user 1m4.754s 00:08:35.537 sys 0m12.874s 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 ************************************ 00:08:35.537 START TEST nvmf_bdev_io_wait 00:08:35.537 ************************************ 00:08:35.537 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:35.537 * Looking for test storage... 
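Note: the lvs_grow_dirty run that finishes above is the dirty-recovery case: the AIO bdev backing the lvstore is deleted while the lvstore is still open, so the next bdev_aio_create triggers the blobstore recovery notices before the logical volume reappears. A condensed sketch of that RPC sequence, assuming the stock rpc.py client and the UUIDs shown in the trace (paths and block size as in the test):

    # sketch only: recover a dirty lvstore by re-creating its backing AIO bdev
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs=e63d3c52-5941-400a-861e-8c4cd50237e0
    lvol=d0e93e75-4618-477d-9141-53d349eb634f

    $rpc bdev_aio_create "$aio_file" aio_bdev 4096        # attach file as 4 KiB-block AIO bdev
    $rpc bdev_wait_for_examine                            # let the lvol module replay the blobstore
    $rpc bdev_get_bdevs -b "$lvol" -t 2000                # lvol is visible again after recovery
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'

    $rpc bdev_aio_delete aio_bdev                         # hot-remove: lvstore closes dirty
    $rpc bdev_lvol_get_lvstores -u "$lvs" || true         # expected -19 "No such device"

    $rpc bdev_aio_create "$aio_file" aio_bdev 4096        # re-attach and recover once more
    $rpc bdev_lvol_delete "$lvol"                         # then tear everything down cleanly
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    $rpc bdev_aio_delete aio_bdev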
00:08:35.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.797 --rc genhtml_branch_coverage=1 00:08:35.797 --rc genhtml_function_coverage=1 00:08:35.797 --rc genhtml_legend=1 00:08:35.797 --rc geninfo_all_blocks=1 00:08:35.797 --rc geninfo_unexecuted_blocks=1 00:08:35.797 00:08:35.797 ' 00:08:35.797 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.797 --rc genhtml_branch_coverage=1 00:08:35.797 --rc genhtml_function_coverage=1 00:08:35.797 --rc genhtml_legend=1 00:08:35.797 --rc geninfo_all_blocks=1 00:08:35.797 --rc geninfo_unexecuted_blocks=1 00:08:35.797 00:08:35.797 ' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.798 --rc genhtml_branch_coverage=1 00:08:35.798 --rc genhtml_function_coverage=1 00:08:35.798 --rc genhtml_legend=1 00:08:35.798 --rc geninfo_all_blocks=1 00:08:35.798 --rc geninfo_unexecuted_blocks=1 00:08:35.798 00:08:35.798 ' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.798 --rc genhtml_branch_coverage=1 00:08:35.798 --rc genhtml_function_coverage=1 00:08:35.798 --rc genhtml_legend=1 00:08:35.798 --rc geninfo_all_blocks=1 00:08:35.798 --rc geninfo_unexecuted_blocks=1 00:08:35.798 00:08:35.798 ' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.798 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
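Note: the "[: : integer expression expected" message above comes from the traced test '[' '' -eq 1 ']' in nvmf/common.sh, a numeric comparison against an empty value; the run continues regardless. The usual guard for that pattern looks like the sketch below (the variable name is illustrative, not the one common.sh actually tests):

    # sketch: avoid numeric tests on possibly-empty variables
    if [[ -n "${SOME_FLAG:-}" && "$SOME_FLAG" -eq 1 ]]; then
        echo "flag enabled"
    fi

    # or normalise the default before testing
    : "${SOME_FLAG:=0}"
    if [ "$SOME_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi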
00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.798 
14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:35.798 Cannot find device "nvmf_init_br" 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:35.798 Cannot find device "nvmf_init_br2" 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:35.798 Cannot find device "nvmf_tgt_br" 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.798 Cannot find device "nvmf_tgt_br2" 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:35.798 Cannot find device "nvmf_init_br" 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:35.798 Cannot find device "nvmf_init_br2" 00:08:35.798 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:35.799 Cannot find device "nvmf_tgt_br" 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:35.799 Cannot find device "nvmf_tgt_br2" 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:35.799 Cannot find device "nvmf_br" 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:35.799 Cannot find device "nvmf_init_if" 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:35.799 Cannot find device "nvmf_init_if2" 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:35.799 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:36.058 
14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:36.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:36.058 00:08:36.058 --- 10.0.0.3 ping statistics --- 00:08:36.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.058 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:36.058 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:36.058 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:08:36.058 00:08:36.058 --- 10.0.0.4 ping statistics --- 00:08:36.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.058 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:36.058 00:08:36.058 --- 10.0.0.1 ping statistics --- 00:08:36.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.058 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:36.058 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:36.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:36.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:08:36.059 00:08:36.059 --- 10.0.0.2 ping statistics --- 00:08:36.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.059 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64946 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64946 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64946 ']' 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.059 14:14:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.318 [2024-12-10 14:14:00.950250] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
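Note: the nvmf_veth_init trace above builds the virtual network that these pings verify: veth pairs for the initiator side (10.0.0.1, 10.0.0.2), their peers bridged on nvmf_br, and the target ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), with iptables ACCEPT rules for port 4420. A condensed sketch of one initiator/target pair, using the same names as the log:

    # sketch: one initiator/target leg of the veth + netns + bridge topology above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # host -> target namespace, across the bridge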
00:08:36.318 [2024-12-10 14:14:00.950353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.318 [2024-12-10 14:14:01.091296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.318 [2024-12-10 14:14:01.123100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.318 [2024-12-10 14:14:01.123163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.318 [2024-12-10 14:14:01.123172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.318 [2024-12-10 14:14:01.123179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.318 [2024-12-10 14:14:01.123186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.318 [2024-12-10 14:14:01.123912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.318 [2024-12-10 14:14:01.124111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.318 [2024-12-10 14:14:01.124751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.318 [2024-12-10 14:14:01.124832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 [2024-12-10 14:14:02.004400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 [2024-12-10 14:14:02.019365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 Malloc0 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.255 [2024-12-10 14:14:02.070315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64981 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64983 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.255 14:14:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.255 { 00:08:37.255 "params": { 00:08:37.255 "name": "Nvme$subsystem", 00:08:37.255 "trtype": "$TEST_TRANSPORT", 00:08:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.255 "adrfam": "ipv4", 00:08:37.255 "trsvcid": "$NVMF_PORT", 00:08:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.255 "hdgst": ${hdgst:-false}, 00:08:37.255 "ddgst": ${ddgst:-false} 00:08:37.255 }, 00:08:37.255 "method": "bdev_nvme_attach_controller" 00:08:37.255 } 00:08:37.255 EOF 00:08:37.255 )") 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64985 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.255 { 00:08:37.255 "params": { 00:08:37.255 "name": "Nvme$subsystem", 00:08:37.255 "trtype": "$TEST_TRANSPORT", 00:08:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.255 "adrfam": "ipv4", 00:08:37.255 "trsvcid": "$NVMF_PORT", 00:08:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.255 "hdgst": ${hdgst:-false}, 00:08:37.255 "ddgst": ${ddgst:-false} 00:08:37.255 }, 00:08:37.255 "method": "bdev_nvme_attach_controller" 00:08:37.255 } 00:08:37.255 EOF 00:08:37.255 )") 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64989 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
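Note: before the bdevperf launches, the target running inside the namespace is configured over RPC: a deliberately tiny bdev-io pool (so the io_wait path actually triggers), the TCP transport, a 64 MiB malloc bdev, and a subsystem with one namespace listening on 10.0.0.3:4420. A sketch of that sequence, assuming rpc_cmd resolves to rpc.py against the target's RPC socket:

    # sketch: target-side bring-up mirroring the rpc_cmd calls in the trace
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1       # tiny bdev_io pool/cache to force IO-wait retries
    $rpc framework_start_init             # finish startup (target ran with --wait-for-rpc)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420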
00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:37.255 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.255 { 00:08:37.255 "params": { 00:08:37.255 "name": "Nvme$subsystem", 00:08:37.255 "trtype": "$TEST_TRANSPORT", 00:08:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.256 "adrfam": "ipv4", 00:08:37.256 "trsvcid": "$NVMF_PORT", 00:08:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.256 "hdgst": ${hdgst:-false}, 00:08:37.256 "ddgst": ${ddgst:-false} 00:08:37.256 }, 00:08:37.256 "method": "bdev_nvme_attach_controller" 00:08:37.256 } 00:08:37.256 EOF 00:08:37.256 )") 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.256 "params": { 00:08:37.256 "name": "Nvme1", 00:08:37.256 "trtype": "tcp", 00:08:37.256 "traddr": "10.0.0.3", 00:08:37.256 "adrfam": "ipv4", 00:08:37.256 "trsvcid": "4420", 00:08:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.256 "hdgst": false, 00:08:37.256 "ddgst": false 00:08:37.256 }, 00:08:37.256 "method": "bdev_nvme_attach_controller" 00:08:37.256 }' 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.256 "params": { 00:08:37.256 "name": "Nvme1", 00:08:37.256 "trtype": "tcp", 00:08:37.256 "traddr": "10.0.0.3", 00:08:37.256 "adrfam": "ipv4", 00:08:37.256 "trsvcid": "4420", 00:08:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.256 "hdgst": false, 00:08:37.256 "ddgst": false 00:08:37.256 }, 00:08:37.256 "method": "bdev_nvme_attach_controller" 00:08:37.256 }' 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.256 { 00:08:37.256 "params": { 00:08:37.256 "name": "Nvme$subsystem", 00:08:37.256 "trtype": "$TEST_TRANSPORT", 00:08:37.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.256 "adrfam": "ipv4", 00:08:37.256 "trsvcid": "$NVMF_PORT", 00:08:37.256 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.256 "hdgst": ${hdgst:-false}, 00:08:37.256 "ddgst": ${ddgst:-false} 00:08:37.256 }, 00:08:37.256 "method": "bdev_nvme_attach_controller" 00:08:37.256 } 00:08:37.256 EOF 00:08:37.256 )") 00:08:37.256 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.514 "params": { 00:08:37.514 "name": "Nvme1", 00:08:37.514 "trtype": "tcp", 00:08:37.514 "traddr": "10.0.0.3", 00:08:37.514 "adrfam": "ipv4", 00:08:37.514 "trsvcid": "4420", 00:08:37.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.514 "hdgst": false, 00:08:37.514 "ddgst": false 00:08:37.514 }, 00:08:37.514 "method": "bdev_nvme_attach_controller" 00:08:37.514 }' 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.514 "params": { 00:08:37.514 "name": "Nvme1", 00:08:37.514 "trtype": "tcp", 00:08:37.514 "traddr": "10.0.0.3", 00:08:37.514 "adrfam": "ipv4", 00:08:37.514 "trsvcid": "4420", 00:08:37.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.514 "hdgst": false, 00:08:37.514 "ddgst": false 00:08:37.514 }, 00:08:37.514 "method": "bdev_nvme_attach_controller" 00:08:37.514 }' 00:08:37.514 14:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64981 00:08:37.514 [2024-12-10 14:14:02.152634] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:08:37.515 [2024-12-10 14:14:02.152718] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:37.515 [2024-12-10 14:14:02.154444] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:08:37.515 [2024-12-10 14:14:02.154519] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:37.515 [2024-12-10 14:14:02.165879] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:08:37.515 [2024-12-10 14:14:02.165994] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:37.515 [2024-12-10 14:14:02.170150] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:08:37.515 [2024-12-10 14:14:02.170251] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:37.515 [2024-12-10 14:14:02.329351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.774 [2024-12-10 14:14:02.356442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.774 [2024-12-10 14:14:02.369262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.774 [2024-12-10 14:14:02.375464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.774 [2024-12-10 14:14:02.406379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:37.774 [2024-12-10 14:14:02.420513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.774 [2024-12-10 14:14:02.421065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.774 [2024-12-10 14:14:02.451997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:37.774 [2024-12-10 14:14:02.463043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.774 [2024-12-10 14:14:02.465943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.774 Running I/O for 1 seconds... 00:08:37.774 [2024-12-10 14:14:02.493609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:37.774 [2024-12-10 14:14:02.507356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.774 Running I/O for 1 seconds... 00:08:37.774 Running I/O for 1 seconds... 00:08:38.033 Running I/O for 1 seconds... 
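The four initiator processes above are kept apart from each other: each bdevperf gets its own instance id (-i 2, -i 3, -i 4 for the read, flush and unmap jobs; the write job uses the spdk1 prefix), which surfaces as the distinct --file-prefix=spdkN in the EAL parameter lines so their hugepage and shared-memory files do not collide. The core masks are single-bit values: 0x10 selects core 4 (write), 0x20 core 5 (read), 0x40 core 6 (flush) and 0x80 core 7 (unmap), matching the "Reactor started on core N" notices above. In the per-workload tables that follow, IOPS and MiB/s describe the same 4096-byte I/O size, so the two columns differ only by a factor of 4096/2^20: 9271 IOPS * 4 KiB is about 36.21 MiB/s for the first write sample, and 162872 IOPS is about 636.22 MiB/s for flush, which runs far faster than the data-moving workloads because a flush carries no payload.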
00:08:38.969 9271.00 IOPS, 36.21 MiB/s 00:08:38.969 Latency(us) 00:08:38.969 [2024-12-10T14:14:03.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.969 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:38.969 Nvme1n1 : 1.01 9321.67 36.41 0.00 0.00 13666.48 7149.38 21328.99 00:08:38.969 [2024-12-10T14:14:03.806Z] =================================================================================================================== 00:08:38.969 [2024-12-10T14:14:03.806Z] Total : 9321.67 36.41 0.00 0.00 13666.48 7149.38 21328.99 00:08:38.969 7570.00 IOPS, 29.57 MiB/s 00:08:38.969 Latency(us) 00:08:38.969 [2024-12-10T14:14:03.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.969 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:38.969 Nvme1n1 : 1.01 7614.01 29.74 0.00 0.00 16713.79 9830.40 25737.77 00:08:38.969 [2024-12-10T14:14:03.806Z] =================================================================================================================== 00:08:38.969 [2024-12-10T14:14:03.806Z] Total : 7614.01 29.74 0.00 0.00 16713.79 9830.40 25737.77 00:08:38.969 8577.00 IOPS, 33.50 MiB/s 00:08:38.969 Latency(us) 00:08:38.969 [2024-12-10T14:14:03.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.969 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:38.969 Nvme1n1 : 1.01 8658.35 33.82 0.00 0.00 14723.39 6464.23 27763.43 00:08:38.969 [2024-12-10T14:14:03.806Z] =================================================================================================================== 00:08:38.969 [2024-12-10T14:14:03.806Z] Total : 8658.35 33.82 0.00 0.00 14723.39 6464.23 27763.43 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64983 00:08:38.969 162872.00 IOPS, 636.22 MiB/s 00:08:38.969 Latency(us) 00:08:38.969 [2024-12-10T14:14:03.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.969 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:38.969 Nvme1n1 : 1.00 162532.02 634.89 0.00 0.00 783.39 366.78 2055.45 00:08:38.969 [2024-12-10T14:14:03.806Z] =================================================================================================================== 00:08:38.969 [2024-12-10T14:14:03.806Z] Total : 162532.02 634.89 0.00 0.00 783.39 366.78 2055.45 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64985 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64989 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.969 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.229 rmmod nvme_tcp 00:08:39.229 rmmod nvme_fabrics 00:08:39.229 rmmod nvme_keyring 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64946 ']' 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64946 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64946 ']' 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64946 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64946 00:08:39.229 killing process with pid 64946 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64946' 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64946 00:08:39.229 14:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64946 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:39.229 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:39.488 00:08:39.488 real 0m3.955s 00:08:39.488 user 0m15.901s 00:08:39.488 sys 0m2.158s 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.488 ************************************ 00:08:39.488 END TEST nvmf_bdev_io_wait 00:08:39.488 ************************************ 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.488 ************************************ 00:08:39.488 START TEST nvmf_queue_depth 00:08:39.488 ************************************ 00:08:39.488 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.748 * Looking for test storage... 
00:08:39.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.748 --rc genhtml_branch_coverage=1 00:08:39.748 --rc genhtml_function_coverage=1 00:08:39.748 --rc genhtml_legend=1 00:08:39.748 --rc geninfo_all_blocks=1 00:08:39.748 --rc geninfo_unexecuted_blocks=1 00:08:39.748 00:08:39.748 ' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.748 --rc genhtml_branch_coverage=1 00:08:39.748 --rc genhtml_function_coverage=1 00:08:39.748 --rc genhtml_legend=1 00:08:39.748 --rc geninfo_all_blocks=1 00:08:39.748 --rc geninfo_unexecuted_blocks=1 00:08:39.748 00:08:39.748 ' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.748 --rc genhtml_branch_coverage=1 00:08:39.748 --rc genhtml_function_coverage=1 00:08:39.748 --rc genhtml_legend=1 00:08:39.748 --rc geninfo_all_blocks=1 00:08:39.748 --rc geninfo_unexecuted_blocks=1 00:08:39.748 00:08:39.748 ' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.748 --rc genhtml_branch_coverage=1 00:08:39.748 --rc genhtml_function_coverage=1 00:08:39.748 --rc genhtml_legend=1 00:08:39.748 --rc geninfo_all_blocks=1 00:08:39.748 --rc geninfo_unexecuted_blocks=1 00:08:39.748 00:08:39.748 ' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.748 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.749 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:39.749 
14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.749 14:14:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:39.749 Cannot find device "nvmf_init_br" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:39.749 Cannot find device "nvmf_init_br2" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:39.749 Cannot find device "nvmf_tgt_br" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.749 Cannot find device "nvmf_tgt_br2" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:39.749 Cannot find device "nvmf_init_br" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:39.749 Cannot find device "nvmf_init_br2" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:39.749 Cannot find device "nvmf_tgt_br" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:39.749 Cannot find device "nvmf_tgt_br2" 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:39.749 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:40.007 Cannot find device "nvmf_br" 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:40.007 Cannot find device "nvmf_init_if" 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:40.007 Cannot find device "nvmf_init_if2" 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:40.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:40.007 14:14:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:40.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:40.007 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:40.008 
14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:40.008 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:40.266 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:40.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:40.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:40.266 00:08:40.266 --- 10.0.0.3 ping statistics --- 00:08:40.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.266 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:40.266 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:40.266 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:40.267 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:08:40.267 00:08:40.267 --- 10.0.0.4 ping statistics --- 00:08:40.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.267 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:40.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:40.267 00:08:40.267 --- 10.0.0.1 ping statistics --- 00:08:40.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.267 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:40.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:40.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:40.267 00:08:40.267 --- 10.0.0.2 ping statistics --- 00:08:40.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.267 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=65247 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 65247 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 65247 ']' 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.267 14:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.267 [2024-12-10 14:14:04.943637] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
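The ping exchanges above verify the veth/bridge topology that nvmf_veth_init assembled a few lines earlier, and the target just launched listens inside that namespace. Condensed from the traced commands (interface and namespace names verbatim; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, follows the same pattern), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

With 10.0.0.1 on the host side and 10.0.0.3 inside nvmf_tgt_ns_spdk bridged together, the TCP listener created later on 10.0.0.3:4420 is reachable from the host-side initiator.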
00:08:40.267 [2024-12-10 14:14:04.943702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.267 [2024-12-10 14:14:05.092173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.524 [2024-12-10 14:14:05.124255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.524 [2024-12-10 14:14:05.124304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.524 [2024-12-10 14:14:05.124314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.524 [2024-12-10 14:14:05.124321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.524 [2024-12-10 14:14:05.124327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.524 [2024-12-10 14:14:05.124604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.524 [2024-12-10 14:14:05.151975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.524 [2024-12-10 14:14:05.254570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.524 Malloc0 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.524 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.525 [2024-12-10 14:14:05.296833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:40.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=65266 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 65266 /var/tmp/bdevperf.sock 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 65266 ']' 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.525 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 [2024-12-10 14:14:05.368782] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
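For reference, the queue_depth bring-up that rpc_cmd traces above reduces to the following RPC sequence. This is a hedged equivalent written with scripts/rpc.py (rpc_cmd is effectively a wrapper around it); addresses, NQNs and sizes are copied from the trace, and repo-relative paths are abbreviated.

# target side (nvmf_tgt running inside nvmf_tgt_ns_spdk, default RPC socket)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# initiator side: bdevperf was started with -z, so it idles on /var/tmp/bdevperf.sock
# until a controller is attached and perform_tests is issued
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The NVMe0n1 bdev that appears in the trace is the namespace exposed by that attach, and it is what the -q 1024 verify workload then drives for the 10-second run below.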
00:08:40.783 [2024-12-10 14:14:05.369235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65266 ] 00:08:40.783 [2024-12-10 14:14:05.533528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.783 [2024-12-10 14:14:05.572479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.783 [2024-12-10 14:14:05.605640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.041 NVMe0n1 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.041 14:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:41.041 Running I/O for 10 seconds... 00:08:43.356 6982.00 IOPS, 27.27 MiB/s [2024-12-10T14:14:09.130Z] 7458.50 IOPS, 29.13 MiB/s [2024-12-10T14:14:10.067Z] 7812.00 IOPS, 30.52 MiB/s [2024-12-10T14:14:11.014Z] 8062.75 IOPS, 31.50 MiB/s [2024-12-10T14:14:11.951Z] 8240.80 IOPS, 32.19 MiB/s [2024-12-10T14:14:12.887Z] 8368.50 IOPS, 32.69 MiB/s [2024-12-10T14:14:14.266Z] 8407.57 IOPS, 32.84 MiB/s [2024-12-10T14:14:15.203Z] 8580.00 IOPS, 33.52 MiB/s [2024-12-10T14:14:16.140Z] 8680.89 IOPS, 33.91 MiB/s [2024-12-10T14:14:16.140Z] 8792.60 IOPS, 34.35 MiB/s 00:08:51.303 Latency(us) 00:08:51.303 [2024-12-10T14:14:16.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.303 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:51.303 Verification LBA range: start 0x0 length 0x4000 00:08:51.303 NVMe0n1 : 10.08 8818.23 34.45 0.00 0.00 115569.04 27048.49 91512.09 00:08:51.303 [2024-12-10T14:14:16.140Z] =================================================================================================================== 00:08:51.303 [2024-12-10T14:14:16.140Z] Total : 8818.23 34.45 0.00 0.00 115569.04 27048.49 91512.09 00:08:51.303 { 00:08:51.303 "results": [ 00:08:51.303 { 00:08:51.303 "job": "NVMe0n1", 00:08:51.303 "core_mask": "0x1", 00:08:51.303 "workload": "verify", 00:08:51.303 "status": "finished", 00:08:51.303 "verify_range": { 00:08:51.303 "start": 0, 00:08:51.303 "length": 16384 00:08:51.303 }, 00:08:51.303 "queue_depth": 1024, 00:08:51.303 "io_size": 4096, 00:08:51.303 "runtime": 10.080372, 00:08:51.303 "iops": 8818.226152764997, 00:08:51.303 "mibps": 34.44619590923827, 00:08:51.303 "io_failed": 0, 00:08:51.303 "io_timeout": 0, 00:08:51.303 "avg_latency_us": 115569.03939938701, 00:08:51.303 "min_latency_us": 27048.494545454545, 00:08:51.303 "max_latency_us": 91512.08727272728 00:08:51.303 
} 00:08:51.303 ], 00:08:51.303 "core_count": 1 00:08:51.303 } 00:08:51.303 14:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 65266 00:08:51.303 14:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 65266 ']' 00:08:51.303 14:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 65266 00:08:51.303 14:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:51.303 14:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.303 14:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65266 00:08:51.303 killing process with pid 65266 00:08:51.303 Received shutdown signal, test time was about 10.000000 seconds 00:08:51.303 00:08:51.303 Latency(us) 00:08:51.303 [2024-12-10T14:14:16.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.303 [2024-12-10T14:14:16.140Z] =================================================================================================================== 00:08:51.303 [2024-12-10T14:14:16.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65266' 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 65266 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 65266 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.303 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.562 rmmod nvme_tcp 00:08:51.562 rmmod nvme_fabrics 00:08:51.562 rmmod nvme_keyring 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 65247 ']' 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 65247 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 65247 ']' 00:08:51.562 
14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 65247 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65247 00:08:51.562 killing process with pid 65247 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65247' 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 65247 00:08:51.562 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 65247 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:51.821 14:14:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.821 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:52.080 00:08:52.080 real 0m12.373s 00:08:52.080 user 0m21.181s 00:08:52.080 sys 0m2.140s 00:08:52.080 ************************************ 00:08:52.080 END TEST nvmf_queue_depth 00:08:52.080 ************************************ 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.080 ************************************ 00:08:52.080 START TEST nvmf_target_multipath 00:08:52.080 ************************************ 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.080 * Looking for test storage... 
00:08:52.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:52.080 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:52.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.081 --rc genhtml_branch_coverage=1 00:08:52.081 --rc genhtml_function_coverage=1 00:08:52.081 --rc genhtml_legend=1 00:08:52.081 --rc geninfo_all_blocks=1 00:08:52.081 --rc geninfo_unexecuted_blocks=1 00:08:52.081 00:08:52.081 ' 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:52.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.081 --rc genhtml_branch_coverage=1 00:08:52.081 --rc genhtml_function_coverage=1 00:08:52.081 --rc genhtml_legend=1 00:08:52.081 --rc geninfo_all_blocks=1 00:08:52.081 --rc geninfo_unexecuted_blocks=1 00:08:52.081 00:08:52.081 ' 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:52.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.081 --rc genhtml_branch_coverage=1 00:08:52.081 --rc genhtml_function_coverage=1 00:08:52.081 --rc genhtml_legend=1 00:08:52.081 --rc geninfo_all_blocks=1 00:08:52.081 --rc geninfo_unexecuted_blocks=1 00:08:52.081 00:08:52.081 ' 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:52.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.081 --rc genhtml_branch_coverage=1 00:08:52.081 --rc genhtml_function_coverage=1 00:08:52.081 --rc genhtml_legend=1 00:08:52.081 --rc geninfo_all_blocks=1 00:08:52.081 --rc geninfo_unexecuted_blocks=1 00:08:52.081 00:08:52.081 ' 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.081 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.339 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.339 
14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:52.340 14:14:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:52.340 Cannot find device "nvmf_init_br" 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:52.340 Cannot find device "nvmf_init_br2" 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:52.340 Cannot find device "nvmf_tgt_br" 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.340 Cannot find device "nvmf_tgt_br2" 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:52.340 14:14:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:52.340 Cannot find device "nvmf_init_br" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:52.340 Cannot find device "nvmf_init_br2" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:52.340 Cannot find device "nvmf_tgt_br" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:52.340 Cannot find device "nvmf_tgt_br2" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:52.340 Cannot find device "nvmf_br" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:52.340 Cannot find device "nvmf_init_if" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:52.340 Cannot find device "nvmf_init_if2" 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:52.340 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
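For reference, the virtual topology that nvmf_veth_init assembles here condenses to the sketch below. Every command appears in the trace; only the comments are added, the second interface pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4) is set up identically, and the bridge enslaving plus iptables ACCEPT rules follow in the next few records.

# Target side lives in its own network namespace, reached over veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listener on 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up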
00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:52.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:52.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:52.599 00:08:52.599 --- 10.0.0.3 ping statistics --- 00:08:52.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.599 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:52.599 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:52.599 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:08:52.599 00:08:52.599 --- 10.0.0.4 ping statistics --- 00:08:52.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.599 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:52.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:52.599 00:08:52.599 --- 10.0.0.1 ping statistics --- 00:08:52.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.599 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:52.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:08:52.599 00:08:52.599 --- 10.0.0.2 ping statistics --- 00:08:52.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.599 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65635 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65635 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65635 ']' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
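For reference, the target launch traced here amounts to the sketch below. The nvmf_tgt invocation is taken from the trace; the readiness loop is an illustrative stand-in for the waitforlisten helper (rpc_get_methods is a stock rpc.py call, but its use for polling here is an assumption, not a copy of the script).

# Run the SPDK NVMe-oF target inside the target namespace (as traced above)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Poll until the RPC socket answers (illustrative replacement for waitforlisten)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done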
00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.599 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:52.599 [2024-12-10 14:14:17.406733] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:08:52.599 [2024-12-10 14:14:17.407008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.858 [2024-12-10 14:14:17.559819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.858 [2024-12-10 14:14:17.600427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.858 [2024-12-10 14:14:17.600484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.858 [2024-12-10 14:14:17.600508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.858 [2024-12-10 14:14:17.600518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.858 [2024-12-10 14:14:17.600526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.858 [2024-12-10 14:14:17.601419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.858 [2024-12-10 14:14:17.601728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.858 [2024-12-10 14:14:17.602429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.858 [2024-12-10 14:14:17.602488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.858 [2024-12-10 14:14:17.636790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.858 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.858 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:52.858 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.858 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.858 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.117 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.117 14:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.375 [2024-12-10 14:14:18.016774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.375 14:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc0 00:08:53.634 Malloc0 00:08:53.634 14:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:53.893 14:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.152 14:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:54.411 [2024-12-10 14:14:19.128259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:54.411 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:54.670 [2024-12-10 14:14:19.408532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:54.670 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:54.929 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:54.929 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.929 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:54.929 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.929 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:54.929 14:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:56.873 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:56.873 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:56.873 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 
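For reference, the provisioning just traced, one malloc-backed namespace exported through two TCP listeners and then connected twice from the host so the kernel sees two paths, condenses to the sketch below. Each command appears in the trace; script paths are abbreviated and the host NQN/ID are written as $NVME_HOSTNQN/$NVME_HOSTID, as the surrounding scripts do.

# Target side: transport, backing bdev, subsystem, namespace, two listeners
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# Host side: connect the same subsystem through both listeners (two paths)
nvme connect "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G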
00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65717 00:08:57.133 14:14:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:57.133 [global] 00:08:57.133 thread=1 00:08:57.133 invalidate=1 00:08:57.133 rw=randrw 00:08:57.133 time_based=1 00:08:57.133 runtime=6 00:08:57.133 ioengine=libaio 00:08:57.133 direct=1 00:08:57.133 bs=4096 00:08:57.133 iodepth=128 00:08:57.133 norandommap=0 00:08:57.133 numjobs=1 00:08:57.133 00:08:57.133 verify_dump=1 00:08:57.133 verify_backlog=512 00:08:57.133 verify_state_save=0 00:08:57.133 do_verify=1 00:08:57.133 verify=crc32c-intel 00:08:57.133 [job0] 00:08:57.133 filename=/dev/nvme0n1 00:08:57.133 Could not set queue depth (nvme0n1) 00:08:57.133 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:57.133 fio-3.35 00:08:57.133 Starting 1 thread 00:08:58.071 14:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:58.330 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:58.589 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:58.848 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.108 14:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65717 00:09:03.298 00:09:03.298 job0: (groupid=0, jobs=1): err= 0: pid=65743: Tue Dec 10 14:14:28 2024 00:09:03.298 read: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(240MiB/6006msec) 00:09:03.298 slat (usec): min=4, max=9347, avg=57.83, stdev=228.86 00:09:03.298 clat (usec): min=1667, max=17740, avg=8498.71, stdev=1535.53 00:09:03.298 lat (usec): min=1685, max=17751, avg=8556.53, stdev=1541.18 00:09:03.298 clat percentiles (usec): 00:09:03.298 | 1.00th=[ 4424], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 7701], 00:09:03.298 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:03.298 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[11994], 00:09:03.298 | 99.00th=[13435], 99.50th=[14091], 99.90th=[15795], 99.95th=[16712], 00:09:03.298 | 99.99th=[17433] 00:09:03.298 bw ( KiB/s): min= 5792, max=28024, per=51.09%, avg=20944.73, stdev=6872.82, samples=11 00:09:03.298 iops : min= 1448, max= 7006, avg=5236.18, stdev=1718.21, samples=11 00:09:03.298 write: IOPS=6034, BW=23.6MiB/s (24.7MB/s)(126MiB/5324msec); 0 zone resets 00:09:03.298 slat (usec): min=15, max=2797, avg=66.19, stdev=160.86 00:09:03.298 clat (usec): min=2454, max=16801, avg=7426.95, stdev=1377.36 00:09:03.298 lat (usec): min=2478, max=16828, avg=7493.14, stdev=1382.45 00:09:03.298 clat percentiles (usec): 00:09:03.298 | 1.00th=[ 3326], 5.00th=[ 4359], 10.00th=[ 5997], 20.00th=[ 6849], 00:09:03.298 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:09:03.298 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8979], 00:09:03.298 | 99.00th=[11731], 99.50th=[12518], 99.90th=[15270], 99.95th=[15926], 00:09:03.298 | 99.99th=[16909] 00:09:03.299 bw ( KiB/s): min= 6248, max=27448, per=87.07%, avg=21016.00, stdev=6695.26, samples=11 00:09:03.299 iops : min= 1562, max= 6862, avg=5254.00, stdev=1673.81, samples=11 00:09:03.299 lat (msec) : 2=0.02%, 4=1.50%, 10=91.23%, 20=7.25% 00:09:03.299 cpu : usr=5.66%, sys=21.20%, ctx=5424, majf=0, minf=90 00:09:03.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:03.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.299 issued rwts: total=61549,32128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.299 00:09:03.299 Run status group 0 (all jobs): 00:09:03.299 READ: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=240MiB (252MB), run=6006-6006msec 00:09:03.299 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=126MiB (132MB), run=5324-5324msec 00:09:03.299 00:09:03.299 Disk stats (read/write): 00:09:03.299 nvme0n1: ios=60651/31479, merge=0/0, ticks=493800/219494, in_queue=713294, util=98.51% 00:09:03.299 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:03.557 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65824 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:03.816 14:14:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:04.075 [global] 00:09:04.075 thread=1 00:09:04.075 invalidate=1 00:09:04.075 rw=randrw 00:09:04.075 time_based=1 00:09:04.075 runtime=6 00:09:04.075 ioengine=libaio 00:09:04.075 direct=1 00:09:04.075 bs=4096 00:09:04.075 iodepth=128 00:09:04.075 norandommap=0 00:09:04.075 numjobs=1 00:09:04.075 00:09:04.075 verify_dump=1 00:09:04.075 verify_backlog=512 00:09:04.075 verify_state_save=0 00:09:04.075 do_verify=1 00:09:04.075 verify=crc32c-intel 00:09:04.075 [job0] 00:09:04.075 filename=/dev/nvme0n1 00:09:04.075 Could not set queue depth (nvme0n1) 00:09:04.075 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.075 fio-3.35 00:09:04.075 Starting 1 thread 00:09:05.013 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:05.272 14:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:05.531 
14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:05.531 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:05.790 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:06.049 14:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65824 00:09:10.257 00:09:10.257 job0: (groupid=0, jobs=1): err= 0: pid=65845: Tue Dec 10 14:14:34 2024 00:09:10.257 read: IOPS=11.4k, BW=44.5MiB/s (46.7MB/s)(267MiB/6007msec) 00:09:10.257 slat (usec): min=3, max=6608, avg=42.04, stdev=187.88 00:09:10.257 clat (usec): min=1641, max=15368, avg=7562.05, stdev=1830.41 00:09:10.257 lat (usec): min=1652, max=15376, avg=7604.09, stdev=1846.19 00:09:10.257 clat percentiles (usec): 00:09:10.257 | 1.00th=[ 3294], 5.00th=[ 4424], 10.00th=[ 5080], 20.00th=[ 5932], 00:09:10.257 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8094], 00:09:10.257 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10552], 00:09:10.257 | 99.00th=[12649], 99.50th=[13173], 99.90th=[13829], 99.95th=[13960], 00:09:10.257 | 99.99th=[14484] 00:09:10.257 bw ( KiB/s): min=11840, max=39216, per=54.72%, avg=24950.91, stdev=8221.97, samples=11 00:09:10.257 iops : min= 2960, max= 9804, avg=6237.73, stdev=2055.49, samples=11 00:09:10.257 write: IOPS=6910, BW=27.0MiB/s (28.3MB/s)(147MiB/5458msec); 0 zone resets 00:09:10.257 slat (usec): min=11, max=1876, avg=54.30, stdev=136.10 00:09:10.257 clat (usec): min=1725, max=14324, avg=6495.21, stdev=1741.58 00:09:10.257 lat (usec): min=1749, max=14344, avg=6549.51, stdev=1757.70 00:09:10.257 clat percentiles (usec): 00:09:10.257 | 1.00th=[ 2737], 5.00th=[ 3490], 10.00th=[ 3884], 20.00th=[ 4555], 00:09:10.257 | 30.00th=[ 5342], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:10.257 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:09:10.257 | 99.00th=[10421], 99.50th=[11338], 99.90th=[12518], 99.95th=[12911], 00:09:10.257 | 99.99th=[13960] 00:09:10.257 bw ( KiB/s): min=12263, max=38552, per=90.16%, avg=24924.00, stdev=8024.67, samples=11 00:09:10.257 iops : min= 3065, max= 9638, avg=6230.91, stdev=2006.29, samples=11 00:09:10.257 lat (msec) : 2=0.08%, 4=6.16%, 10=89.81%, 20=3.96% 00:09:10.257 cpu : usr=5.89%, sys=22.59%, ctx=5811, majf=0, minf=125 00:09:10.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:10.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.257 issued rwts: total=68471,37720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.257 
00:09:10.257 Run status group 0 (all jobs): 00:09:10.257 READ: bw=44.5MiB/s (46.7MB/s), 44.5MiB/s-44.5MiB/s (46.7MB/s-46.7MB/s), io=267MiB (280MB), run=6007-6007msec 00:09:10.257 WRITE: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=147MiB (155MB), run=5458-5458msec 00:09:10.257 00:09:10.257 Disk stats (read/write): 00:09:10.257 nvme0n1: ios=67700/36846, merge=0/0, ticks=489706/224260, in_queue=713966, util=98.56% 00:09:10.257 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:10.257 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.258 14:14:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:10.258 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:10.258 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.258 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.258 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:10.258 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:10.258 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.516 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:10.516 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.775 rmmod nvme_tcp 00:09:10.775 rmmod nvme_fabrics 00:09:10.775 rmmod nvme_keyring 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 65635 ']' 00:09:10.775 14:14:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65635 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65635 ']' 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65635 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65635 00:09:10.775 killing process with pid 65635 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65635' 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65635 00:09:10.775 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65635 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.034 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.035 14:14:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:11.035 00:09:11.035 real 0m19.141s 00:09:11.035 user 1m10.503s 00:09:11.035 sys 0m10.124s 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.035 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:11.035 ************************************ 00:09:11.035 END TEST nvmf_target_multipath 00:09:11.035 ************************************ 00:09:11.294 14:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:11.294 14:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.294 14:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.294 14:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.294 ************************************ 00:09:11.294 START TEST nvmf_zcopy 00:09:11.294 ************************************ 00:09:11.294 14:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:11.294 * Looking for test storage... 
00:09:11.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.294 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.294 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.294 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.294 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.294 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.294 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.295 --rc genhtml_branch_coverage=1 00:09:11.295 --rc genhtml_function_coverage=1 00:09:11.295 --rc genhtml_legend=1 00:09:11.295 --rc geninfo_all_blocks=1 00:09:11.295 --rc geninfo_unexecuted_blocks=1 00:09:11.295 00:09:11.295 ' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.295 --rc genhtml_branch_coverage=1 00:09:11.295 --rc genhtml_function_coverage=1 00:09:11.295 --rc genhtml_legend=1 00:09:11.295 --rc geninfo_all_blocks=1 00:09:11.295 --rc geninfo_unexecuted_blocks=1 00:09:11.295 00:09:11.295 ' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.295 --rc genhtml_branch_coverage=1 00:09:11.295 --rc genhtml_function_coverage=1 00:09:11.295 --rc genhtml_legend=1 00:09:11.295 --rc geninfo_all_blocks=1 00:09:11.295 --rc geninfo_unexecuted_blocks=1 00:09:11.295 00:09:11.295 ' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.295 --rc genhtml_branch_coverage=1 00:09:11.295 --rc genhtml_function_coverage=1 00:09:11.295 --rc genhtml_legend=1 00:09:11.295 --rc geninfo_all_blocks=1 00:09:11.295 --rc geninfo_unexecuted_blocks=1 00:09:11.295 00:09:11.295 ' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
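An aside on the dense scripts/common.sh trace just above: the lcov version gate amounts to splitting both version strings on ".-:" and comparing the numeric components left to right. A minimal stand-alone sketch of that comparison (not the actual scripts/common.sh implementation) would look like:

    # lt_sketch: simplified re-creation of the cmp_versions walk traced above;
    # the first differing numeric component decides the ordering.
    lt_sketch() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt_sketch 1.15 2 && echo "1.15 < 2"   # matches the lcov check in the log above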
00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.295 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
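One line worth calling out in the common.sh sourcing above is the logged complaint "line 33: [: : integer expression expected". The trace shows it comes from a numeric test whose operand expanded to the empty string ('[' '' -eq 1 ']'), which bash's [ builtin rejects; the run simply continues past it (the following common.sh lines execute), so it reads as harmless noise here. A two-line reproduction of the pattern, with a defensive spelling alongside, would be:

    # MAYBE_FLAG is a placeholder name, not the variable common.sh actually tests.
    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] && echo enabled        # prints "[: : integer expression expected"
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo enabled   # quietly false when the flag is unset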
00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:11.295 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.296 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:11.554 Cannot find device "nvmf_init_br" 00:09:11.554 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:11.554 14:14:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:11.554 Cannot find device "nvmf_init_br2" 00:09:11.554 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:11.554 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:11.554 Cannot find device "nvmf_tgt_br" 00:09:11.554 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.555 Cannot find device "nvmf_tgt_br2" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:11.555 Cannot find device "nvmf_init_br" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:11.555 Cannot find device "nvmf_init_br2" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:11.555 Cannot find device "nvmf_tgt_br" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:11.555 Cannot find device "nvmf_tgt_br2" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:11.555 Cannot find device "nvmf_br" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:11.555 Cannot find device "nvmf_init_if" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:11.555 Cannot find device "nvmf_init_if2" 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:11.555 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:11.813 14:14:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:11.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:11.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:11.813 00:09:11.813 --- 10.0.0.3 ping statistics --- 00:09:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.813 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:11.813 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:11.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:11.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:09:11.814 00:09:11.814 --- 10.0.0.4 ping statistics --- 00:09:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.814 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:11.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:11.814 00:09:11.814 --- 10.0.0.1 ping statistics --- 00:09:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.814 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:11.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:11.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:09:11.814 00:09:11.814 --- 10.0.0.2 ping statistics --- 00:09:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.814 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=66146 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 66146 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 66146 ']' 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.814 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.814 [2024-12-10 14:14:36.606880] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:09:11.814 [2024-12-10 14:14:36.607031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.072 [2024-12-10 14:14:36.763531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.072 [2024-12-10 14:14:36.801507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.072 [2024-12-10 14:14:36.801584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.072 [2024-12-10 14:14:36.801608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.072 [2024-12-10 14:14:36.801619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.072 [2024-12-10 14:14:36.801628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.072 [2024-12-10 14:14:36.802026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.072 [2024-12-10 14:14:36.835878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.072 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.072 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:12.072 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.072 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.072 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.331 [2024-12-10 14:14:36.930278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.331 [2024-12-10 14:14:36.946428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.331 malloc0 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.331 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.332 { 00:09:12.332 "params": { 00:09:12.332 "name": "Nvme$subsystem", 00:09:12.332 "trtype": "$TEST_TRANSPORT", 00:09:12.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.332 "adrfam": "ipv4", 00:09:12.332 "trsvcid": "$NVMF_PORT", 00:09:12.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.332 "hdgst": ${hdgst:-false}, 00:09:12.332 "ddgst": ${ddgst:-false} 00:09:12.332 }, 00:09:12.332 "method": "bdev_nvme_attach_controller" 00:09:12.332 } 00:09:12.332 EOF 00:09:12.332 )") 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
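Pulling the zcopy bring-up out of the interleaved trace above: the target side is a handful of rpc.py calls (a tcp transport created with --zcopy, subsystem cnode1, a tcp listener and a discovery listener on 10.0.0.3:4420, and a 32 MiB malloc bdev attached as namespace 1), while the initiator side is bdevperf pointed at the JSON that gen_nvmf_target_json emits (printed a few lines below). Condensed into plain commands, with the caveat that the test itself goes through its rpc_cmd wrapper and that reading /dev/fd/62 as process substitution is an assumption, not something the log states:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # assumption: the /dev/fd/62 seen in the trace is the generated JSON fed in
    # via process substitution
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192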
00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:12.332 14:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.332 "params": { 00:09:12.332 "name": "Nvme1", 00:09:12.332 "trtype": "tcp", 00:09:12.332 "traddr": "10.0.0.3", 00:09:12.332 "adrfam": "ipv4", 00:09:12.332 "trsvcid": "4420", 00:09:12.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.332 "hdgst": false, 00:09:12.332 "ddgst": false 00:09:12.332 }, 00:09:12.332 "method": "bdev_nvme_attach_controller" 00:09:12.332 }' 00:09:12.332 [2024-12-10 14:14:37.028219] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:09:12.332 [2024-12-10 14:14:37.028319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66171 ] 00:09:12.591 [2024-12-10 14:14:37.178045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.591 [2024-12-10 14:14:37.216969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.591 [2024-12-10 14:14:37.258490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.591 Running I/O for 10 seconds... 00:09:14.536 6448.00 IOPS, 50.38 MiB/s [2024-12-10T14:14:40.751Z] 6453.50 IOPS, 50.42 MiB/s [2024-12-10T14:14:41.688Z] 6494.67 IOPS, 50.74 MiB/s [2024-12-10T14:14:42.624Z] 6529.00 IOPS, 51.01 MiB/s [2024-12-10T14:14:43.561Z] 6566.00 IOPS, 51.30 MiB/s [2024-12-10T14:14:44.498Z] 6594.83 IOPS, 51.52 MiB/s [2024-12-10T14:14:45.434Z] 6620.86 IOPS, 51.73 MiB/s [2024-12-10T14:14:46.371Z] 6627.88 IOPS, 51.78 MiB/s [2024-12-10T14:14:47.790Z] 6630.00 IOPS, 51.80 MiB/s [2024-12-10T14:14:47.790Z] 6617.40 IOPS, 51.70 MiB/s 00:09:22.953 Latency(us) 00:09:22.953 [2024-12-10T14:14:47.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.953 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:22.953 Verification LBA range: start 0x0 length 0x1000 00:09:22.953 Nvme1n1 : 10.01 6619.51 51.71 0.00 0.00 19274.33 1079.85 32887.16 00:09:22.953 [2024-12-10T14:14:47.790Z] =================================================================================================================== 00:09:22.953 [2024-12-10T14:14:47.790Z] Total : 6619.51 51.71 0.00 0.00 19274.33 1079.85 32887.16 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66284 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:22.953 { 00:09:22.953 "params": { 00:09:22.953 "name": "Nvme$subsystem", 00:09:22.953 "trtype": "$TEST_TRANSPORT", 00:09:22.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.953 "adrfam": "ipv4", 00:09:22.953 "trsvcid": "$NVMF_PORT", 00:09:22.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.953 "hdgst": ${hdgst:-false}, 00:09:22.953 "ddgst": ${ddgst:-false} 00:09:22.953 }, 00:09:22.953 "method": "bdev_nvme_attach_controller" 00:09:22.953 } 00:09:22.953 EOF 00:09:22.953 )") 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:22.953 [2024-12-10 14:14:47.511094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.953 [2024-12-10 14:14:47.511131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:22.953 14:14:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:22.953 "params": { 00:09:22.953 "name": "Nvme1", 00:09:22.953 "trtype": "tcp", 00:09:22.953 "traddr": "10.0.0.3", 00:09:22.953 "adrfam": "ipv4", 00:09:22.953 "trsvcid": "4420", 00:09:22.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.953 "hdgst": false, 00:09:22.953 "ddgst": false 00:09:22.954 }, 00:09:22.954 "method": "bdev_nvme_attach_controller" 00:09:22.954 }' 00:09:22.954 [2024-12-10 14:14:47.523042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.523067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.534997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.535029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.547040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.547063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.550473] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:09:22.954 [2024-12-10 14:14:47.550529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66284 ] 00:09:22.954 [2024-12-10 14:14:47.559054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.559075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.571086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.571110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.579105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.579125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.587090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.587130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.599093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.599115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.611114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.611169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.623106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.623127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.635117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.635139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.647111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.647149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.659120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.659141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.671124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.671161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.683134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.683171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.693063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.954 [2024-12-10 14:14:47.695170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.695216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.707190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.707229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.719190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.719225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.729157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.954 [2024-12-10 14:14:47.731157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.731196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.743190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.743236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.755187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.755239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.766754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.954 [2024-12-10 14:14:47.767207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.767231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.954 [2024-12-10 14:14:47.779208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.954 [2024-12-10 14:14:47.779259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.791181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.791205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.803204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.803251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.815203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.815245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.827211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.827253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.839225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.839252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.851256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.851285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.863244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:23.213 [2024-12-10 14:14:47.863288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 Running I/O for 5 seconds... 00:09:23.213 [2024-12-10 14:14:47.875248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.875302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.892227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.892315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.909836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.909880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.925587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.213 [2024-12-10 14:14:47.925630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.213 [2024-12-10 14:14:47.943827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:47.943871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.214 [2024-12-10 14:14:47.959741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:47.959784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.214 [2024-12-10 14:14:47.977022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:47.977064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.214 [2024-12-10 14:14:47.992032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:47.992075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.214 [2024-12-10 14:14:48.003153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:48.003198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.214 [2024-12-10 14:14:48.019007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:48.019062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.214 [2024-12-10 14:14:48.036302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.214 [2024-12-10 14:14:48.036345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.052397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.052425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.070185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.070228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.085021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.085065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:23.473 [2024-12-10 14:14:48.100084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.100173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.109043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.109070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.124602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.124645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.140807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.140851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.157385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.157428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.173311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.173354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.190362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.190405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.207847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.207891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.223057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.223100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.234496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.234540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.250352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.250410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.266911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.266990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.284057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.284099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.473 [2024-12-10 14:14:48.299401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.473 [2024-12-10 14:14:48.299462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.311064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 
[2024-12-10 14:14:48.311118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.326723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.326771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.343937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.344006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.359482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.359524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.369037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.369094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.383793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.383839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.401039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.401083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.417213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.417257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.434181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.434224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.449932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.450003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.465494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.465538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.483502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.483546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.497122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.497164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.513178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.513221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.529399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.529448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.546957] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.547025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.733 [2024-12-10 14:14:48.561985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.733 [2024-12-10 14:14:48.562041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.572133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.572178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.587814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.587859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.603967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.604001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.621402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.621441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.636320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.636363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.652667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.652710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.992 [2024-12-10 14:14:48.669250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.992 [2024-12-10 14:14:48.669311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.686360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.686403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.703159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.703202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.720549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.720592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.736129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.736176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.746121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.746148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.760296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.760384] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.776634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.776688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.792280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.792324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.803619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.803662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.993 [2024-12-10 14:14:48.821039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.993 [2024-12-10 14:14:48.821112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 [2024-12-10 14:14:48.836777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.836839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 [2024-12-10 14:14:48.853584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.853628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 [2024-12-10 14:14:48.870692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.870723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 12560.00 IOPS, 98.12 MiB/s [2024-12-10T14:14:49.089Z] [2024-12-10 14:14:48.886407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.886483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 [2024-12-10 14:14:48.905864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.905909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 [2024-12-10 14:14:48.920186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.920229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.252 [2024-12-10 14:14:48.935334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.252 [2024-12-10 14:14:48.935379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:48.952216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:48.952260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:48.970281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:48.970325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:48.984267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:48.984312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 
14:14:48.999968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:49.000038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:49.017333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:49.017377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:49.032551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:49.032594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:49.048264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:49.048308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:49.064833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:49.064877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.253 [2024-12-10 14:14:49.081883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.253 [2024-12-10 14:14:49.081927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.096450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.096494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.112086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.112135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.129076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.129119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.144903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.144947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.163009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.163084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.176669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.176722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.193843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.193886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.208983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.209037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.220298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.220356] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.237104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.237148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.252649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.252693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.271428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.271475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.285514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.285557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.302285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.302329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.317798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.317874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.512 [2024-12-10 14:14:49.334522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.512 [2024-12-10 14:14:49.334578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.351990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.352057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.369385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.369432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.385474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.385522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.401783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.401826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.413527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.413571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.429346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.429404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.445907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.445950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.463047] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.463102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.480245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.480290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.496757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.496802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.514070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.514113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.531115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.531158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.546267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.546311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.563074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.563100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.578799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.578843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.772 [2024-12-10 14:14:49.597351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.772 [2024-12-10 14:14:49.597394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.612431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.612475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.630178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.630219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.644235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.644280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.659558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.659603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.669724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.669750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.684518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.684543] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.694180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.694205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.709844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.709870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.728859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.728924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.743135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.743179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.759571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.759616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.775618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.775662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.792261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.031 [2024-12-10 14:14:49.792321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.031 [2024-12-10 14:14:49.807859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.032 [2024-12-10 14:14:49.807910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.032 [2024-12-10 14:14:49.825086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.032 [2024-12-10 14:14:49.825129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.032 [2024-12-10 14:14:49.841605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.032 [2024-12-10 14:14:49.841649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.032 [2024-12-10 14:14:49.857290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.032 [2024-12-10 14:14:49.857335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.867316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.867359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 12511.00 IOPS, 97.74 MiB/s [2024-12-10T14:14:50.128Z] [2024-12-10 14:14:49.881767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.881812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.896980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.897055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 
14:14:49.916020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.916086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.932300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.932344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.948307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.948361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.966150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.966208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.980600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.980646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:49.998089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:49.998132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.012759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.012807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.028577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.028623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.046672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.046707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.060541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.060591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.075619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.075663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.087005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.087061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.102075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.102139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.291 [2024-12-10 14:14:50.113374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.291 [2024-12-10 14:14:50.113434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.550 [2024-12-10 14:14:50.129763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.550 [2024-12-10 14:14:50.129794] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.550 [2024-12-10 14:14:50.145634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.550 [2024-12-10 14:14:50.145680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.164250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.164278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.178667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.178697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.194261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.194308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.211549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.211577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.227060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.227088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.242521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.242573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.261231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.261276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.275884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.275929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.291078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.291144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.300170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.300220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.316723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.316768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.333941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.334016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.349823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.349868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.368048] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.368074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.551 [2024-12-10 14:14:50.381402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.551 [2024-12-10 14:14:50.381446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.398340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.398385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.414151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.414197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.431917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.431963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.446334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.446395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.462161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.462189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.478016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.478074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.496292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.496351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.511277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.511325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.526907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.526956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.537110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.537140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.552709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.552756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.569400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.569447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.586982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.587039] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.601984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.602043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.617485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.617530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.810 [2024-12-10 14:14:50.635671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.810 [2024-12-10 14:14:50.635715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.652297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.652343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.669381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.669441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.685582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.685611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.703499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.703547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.719492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.719537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.735276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.735320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.753763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.753792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.768548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.768574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.783804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.783851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.793982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.794056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.809139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.809186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.824459] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.824505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.836072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.836100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.852859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.852903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.867353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.867397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 12280.00 IOPS, 95.94 MiB/s [2024-12-10T14:14:50.906Z] [2024-12-10 14:14:50.883707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.883750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.069 [2024-12-10 14:14:50.900265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.069 [2024-12-10 14:14:50.900309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:50.917054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:50.917083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:50.933241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:50.933284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:50.949687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:50.949718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:50.966700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:50.966732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:50.982923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:50.982982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.000171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.000216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.015312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.015375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.025624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.025670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.040219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:26.328 [2024-12-10 14:14:51.040264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.058002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.058048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.074123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.074168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.090276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.090321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.109252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.109296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.123662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.123707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.139549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.139594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.328 [2024-12-10 14:14:51.157912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.328 [2024-12-10 14:14:51.157956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.172421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.172483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.187798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.187843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.206761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.206792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.222738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.222771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.240954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.241026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.256582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.256631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.267841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.267886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.284215] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.284288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.300068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.300132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.309550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.309594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.324913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.324959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.339978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.340039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.351250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.351296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.367603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.367648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.384719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.384763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.401035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.587 [2024-12-10 14:14:51.401080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.587 [2024-12-10 14:14:51.418200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.588 [2024-12-10 14:14:51.418244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.846 [2024-12-10 14:14:51.433671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.846 [2024-12-10 14:14:51.433717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.846 [2024-12-10 14:14:51.450315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.846 [2024-12-10 14:14:51.450362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.846 [2024-12-10 14:14:51.465879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.846 [2024-12-10 14:14:51.465925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.846 [2024-12-10 14:14:51.483004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.483042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.499288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.499349] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.516954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.517009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.532457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.532503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.542192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.542222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.557167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.557212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.573053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.573098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.590796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.590843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.606400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.606449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.624645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.624710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.639697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.639754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.650751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.650797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.847 [2024-12-10 14:14:51.667354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.847 [2024-12-10 14:14:51.667398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.683571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.683616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.699979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.700031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.716453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.716498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.733624] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.733668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.750424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.750470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.767954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.768009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.783267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.783312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.793214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.793259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.807958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.808020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.819064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.819093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.833887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.833913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.851368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.851395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.865724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.865769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 12240.25 IOPS, 95.63 MiB/s [2024-12-10T14:14:51.943Z] [2024-12-10 14:14:51.882831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.882878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.896676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.896721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.913444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.913489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.106 [2024-12-10 14:14:51.929961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.106 [2024-12-10 14:14:51.930040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:51.946217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
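The error pair above repeats for the rest of the roughly five-second I/O run whose periodic IOPS counters are interleaved with it. Judging from the surrounding trace, the zcopy test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached to nqn.2016-06.io.spdk:cnode1, so each attempt is expected to fail with the subsystem.c "already in use" message followed by the nvmf_rpc.c "Unable to add namespace" message. A minimal sketch of the call shape that produces this pair, assuming a bdev named malloc0 and a plain retry loop (both are illustrative; the RPC name and arguments match the rpc_cmd invocations later in this log):

  # Hypothetical reproduction loop -- not the actual test script.
  # Re-adding an NSID that is already attached fails by design.
  for _ in $(seq 1 50); do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done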
00:09:27.364 [2024-12-10 14:14:51.946264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:51.963070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:51.963115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:51.979525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:51.979570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:51.998307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:51.998351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.013471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.013539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.025397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.025479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.040027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.040080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.055864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.055909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.072935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.072989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.089398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.089443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.105153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.105199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.121693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.121739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.131358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.131402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.147623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.147668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.165166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.165215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.181079] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.181123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.364 [2024-12-10 14:14:52.198276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.364 [2024-12-10 14:14:52.198307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.214240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.214285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.223482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.223526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.240157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.240228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.256569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.256620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.273761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.273807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.289355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.289399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.307332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.307377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.322181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.322244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.338485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.338531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.353983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.354043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.362930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.363013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.378119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.378147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.393434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.393479] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.411572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.411617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.427279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.427325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.622 [2024-12-10 14:14:52.446526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.622 [2024-12-10 14:14:52.446598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.461849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.461895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.479053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.479099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.495805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.495853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.512871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.512924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.528544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.528589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.543528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.543573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.553419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.553463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.568724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.568787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.583859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.583905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.593215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.593260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.608530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.608576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.622209] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.622271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.638711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.638758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.654528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.654597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.672200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.672245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.688148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.688192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.699766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.881 [2024-12-10 14:14:52.699810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.881 [2024-12-10 14:14:52.708171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.882 [2024-12-10 14:14:52.708217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.723938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.724043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.733384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.733428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.747628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.747673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.756458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.756502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.771743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.771787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.782876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.782919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.799704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.799748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.140 [2024-12-10 14:14:52.814832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.140 [2024-12-10 14:14:52.814879] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.831071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.831118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.847831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.847876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.857484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.857529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.873514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.873560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 12240.20 IOPS, 95.63 MiB/s [2024-12-10T14:14:52.978Z] [2024-12-10 14:14:52.883422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.883449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 00:09:28.141 Latency(us) 00:09:28.141 [2024-12-10T14:14:52.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.141 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:28.141 Nvme1n1 : 5.01 12242.74 95.65 0.00 0.00 10442.27 3813.00 21805.61 00:09:28.141 [2024-12-10T14:14:52.978Z] =================================================================================================================== 00:09:28.141 [2024-12-10T14:14:52.978Z] Total : 12242.74 95.65 0.00 0.00 10442.27 3813.00 21805.61 00:09:28.141 [2024-12-10 14:14:52.892744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.892770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.900749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.900783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.912780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.912815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.924784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.924821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.936785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.936844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.948779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.948836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.960830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 
14:14:52.960892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.141 [2024-12-10 14:14:52.972771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.141 [2024-12-10 14:14:52.972800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.400 [2024-12-10 14:14:52.980766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.400 [2024-12-10 14:14:52.980790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.400 [2024-12-10 14:14:52.992819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.400 [2024-12-10 14:14:52.992890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.400 [2024-12-10 14:14:53.004780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.400 [2024-12-10 14:14:53.004824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.400 [2024-12-10 14:14:53.012767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.400 [2024-12-10 14:14:53.012790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.400 [2024-12-10 14:14:53.024764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.400 [2024-12-10 14:14:53.024801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.400 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66284) - No such process 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66284 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.400 delay0 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.400 14:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:28.400 [2024-12-10 14:14:53.228305] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:34.967 Initializing NVMe Controllers 00:09:34.967 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:34.967 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:34.967 Initialization complete. Launching workers. 00:09:34.967 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 72 00:09:34.967 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 359, failed to submit 33 00:09:34.967 success 242, unsuccessful 117, failed 0 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.967 rmmod nvme_tcp 00:09:34.967 rmmod nvme_fabrics 00:09:34.967 rmmod nvme_keyring 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 66146 ']' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 66146 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 66146 ']' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 66146 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66146 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:34.967 killing process with pid 66146 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66146' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 66146 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 66146 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.967 14:14:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:34.967 00:09:34.967 real 0m23.884s 00:09:34.967 user 0m39.049s 00:09:34.967 sys 0m6.726s 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.967 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.967 ************************************ 00:09:34.967 END TEST nvmf_zcopy 00:09:34.967 ************************************ 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.227 ************************************ 00:09:35.227 START TEST nvmf_nmic 00:09:35.227 ************************************ 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:35.227 * Looking for test storage... 00:09:35.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:35.227 14:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:35.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.227 --rc genhtml_branch_coverage=1 00:09:35.227 --rc genhtml_function_coverage=1 00:09:35.227 --rc genhtml_legend=1 00:09:35.227 --rc geninfo_all_blocks=1 00:09:35.227 --rc geninfo_unexecuted_blocks=1 00:09:35.227 00:09:35.227 ' 00:09:35.227 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:35.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.227 --rc genhtml_branch_coverage=1 00:09:35.227 --rc genhtml_function_coverage=1 00:09:35.227 --rc genhtml_legend=1 00:09:35.227 --rc geninfo_all_blocks=1 00:09:35.228 --rc geninfo_unexecuted_blocks=1 00:09:35.228 00:09:35.228 ' 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:35.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.228 --rc genhtml_branch_coverage=1 00:09:35.228 --rc genhtml_function_coverage=1 00:09:35.228 --rc genhtml_legend=1 00:09:35.228 --rc geninfo_all_blocks=1 00:09:35.228 --rc geninfo_unexecuted_blocks=1 00:09:35.228 00:09:35.228 ' 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:35.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.228 --rc genhtml_branch_coverage=1 00:09:35.228 --rc genhtml_function_coverage=1 00:09:35.228 --rc genhtml_legend=1 00:09:35.228 --rc geninfo_all_blocks=1 00:09:35.228 --rc geninfo_unexecuted_blocks=1 00:09:35.228 00:09:35.228 ' 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.228 14:15:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.228 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.487 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.487 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.487 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.487 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.487 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:35.488 14:15:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.488 Cannot 
find device "nvmf_init_br" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.488 Cannot find device "nvmf_init_br2" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.488 Cannot find device "nvmf_tgt_br" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.488 Cannot find device "nvmf_tgt_br2" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.488 Cannot find device "nvmf_init_br" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.488 Cannot find device "nvmf_init_br2" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.488 Cannot find device "nvmf_tgt_br" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.488 Cannot find device "nvmf_tgt_br2" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.488 Cannot find device "nvmf_br" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.488 Cannot find device "nvmf_init_if" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.488 Cannot find device "nvmf_init_if2" 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.488 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:35.748 00:09:35.748 --- 10.0.0.3 ping statistics --- 00:09:35.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.748 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:35.748 00:09:35.748 --- 10.0.0.4 ping statistics --- 00:09:35.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.748 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:35.748 00:09:35.748 --- 10.0.0.1 ping statistics --- 00:09:35.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.748 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:09:35.748 00:09:35.748 --- 10.0.0.2 ping statistics --- 00:09:35.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.748 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66663 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66663 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66663 ']' 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.748 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.748 [2024-12-10 14:15:00.529844] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
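The ipts/iptr helpers visible in the trace tag every firewall rule they add with an 'SPDK_NVMF:' comment, so teardown can later strip exactly those rules without disturbing unrelated iptables state (the iptr step appears further down, during nvmftestfini). A minimal sketch of that tag-and-strip pattern is shown below; the helper names and rules mirror the log, while the function bodies are a reconstruction from the expanded commands in the trace, not the harness source verbatim.

# Sketch of the tag-and-strip iptables pattern used by ipts/iptr.
ipts() {
    # Apply the rule and record the original arguments in a comment tag.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Drop every rule carrying the SPDK_NVMF tag, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# Allow NVMe/TCP (port 4420) in from both initiator interfaces and let
# bridged traffic be forwarded, as traced above.
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Quick reachability check across the bridge before starting the target.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1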
00:09:35.748 [2024-12-10 14:15:00.529947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.007 [2024-12-10 14:15:00.680797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.007 [2024-12-10 14:15:00.720416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.007 [2024-12-10 14:15:00.720474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.007 [2024-12-10 14:15:00.720499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.007 [2024-12-10 14:15:00.720509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.007 [2024-12-10 14:15:00.720518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.007 [2024-12-10 14:15:00.721438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.007 [2024-12-10 14:15:00.721514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.007 [2024-12-10 14:15:00.721581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.007 [2024-12-10 14:15:00.721583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.007 [2024-12-10 14:15:00.754418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.007 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.007 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:36.007 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.007 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.007 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 [2024-12-10 14:15:00.853597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 Malloc0 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.266 14:15:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 [2024-12-10 14:15:00.915648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 test case1: single bdev can't be used in multiple subsystems 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 [2024-12-10 14:15:00.939500] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:36.266 [2024-12-10 14:15:00.939558] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:36.266 [2024-12-10 14:15:00.939572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.266 request: 00:09:36.266 { 00:09:36.266 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:36.266 "namespace": { 00:09:36.266 "bdev_name": "Malloc0", 00:09:36.266 "no_auto_visible": false, 00:09:36.266 "hide_metadata": false 00:09:36.266 }, 00:09:36.266 "method": "nvmf_subsystem_add_ns", 00:09:36.266 "req_id": 1 00:09:36.266 } 00:09:36.266 Got JSON-RPC error response 00:09:36.266 response: 00:09:36.266 { 00:09:36.266 "code": -32602, 00:09:36.266 "message": "Invalid parameters" 00:09:36.266 } 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:36.266 Adding namespace failed - expected result. 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:36.266 test case2: host connect to nvmf target in multiple paths 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.266 [2024-12-10 14:15:00.951626] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.266 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:36.266 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:36.525 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.525 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:36.525 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.525 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:36.525 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:38.429 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:38.430 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:38.430 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.430 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:38.430 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:38.430 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:38.430 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:38.688 [global] 00:09:38.689 thread=1 00:09:38.689 invalidate=1 00:09:38.689 rw=write 00:09:38.689 time_based=1 00:09:38.689 runtime=1 00:09:38.689 ioengine=libaio 00:09:38.689 direct=1 00:09:38.689 bs=4096 00:09:38.689 iodepth=1 00:09:38.689 norandommap=0 00:09:38.689 numjobs=1 00:09:38.689 00:09:38.689 verify_dump=1 00:09:38.689 verify_backlog=512 00:09:38.689 verify_state_save=0 00:09:38.689 do_verify=1 00:09:38.689 verify=crc32c-intel 00:09:38.689 [job0] 00:09:38.689 filename=/dev/nvme0n1 00:09:38.689 Could not set queue depth (nvme0n1) 00:09:38.689 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.689 fio-3.35 00:09:38.689 Starting 1 thread 00:09:40.093 00:09:40.093 job0: (groupid=0, jobs=1): err= 0: pid=66742: Tue Dec 10 14:15:04 2024 00:09:40.093 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:40.093 slat (nsec): min=11061, max=50265, avg=13841.18, stdev=4242.95 00:09:40.093 clat (usec): min=127, max=787, avg=170.78, stdev=26.62 00:09:40.093 lat (usec): min=138, max=799, avg=184.62, stdev=27.26 00:09:40.093 clat percentiles (usec): 00:09:40.093 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:40.093 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:09:40.093 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 212], 00:09:40.093 | 99.00th=[ 243], 99.50th=[ 273], 99.90th=[ 383], 99.95th=[ 437], 00:09:40.093 | 99.99th=[ 791] 00:09:40.093 write: IOPS=3413, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec); 0 zone resets 00:09:40.093 slat (usec): min=13, max=101, avg=20.50, stdev= 6.29 00:09:40.093 clat (usec): min=76, max=292, avg=103.29, stdev=17.55 00:09:40.093 lat (usec): min=93, max=313, avg=123.79, stdev=19.62 00:09:40.093 clat percentiles (usec): 00:09:40.093 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 90], 00:09:40.093 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 102], 00:09:40.093 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 127], 95.00th=[ 139], 00:09:40.094 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 208], 00:09:40.094 | 99.99th=[ 293] 00:09:40.094 bw ( KiB/s): min=13272, max=13272, per=97.20%, avg=13272.00, stdev= 0.00, samples=1 00:09:40.094 iops : min= 3318, max= 3318, avg=3318.00, stdev= 0.00, samples=1 00:09:40.094 lat (usec) : 100=28.74%, 250=70.87%, 500=0.37%, 1000=0.02% 00:09:40.094 cpu : usr=2.30%, sys=8.90%, ctx=6489, majf=0, minf=5 00:09:40.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.094 issued rwts: total=3072,3417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.094 00:09:40.094 Run status group 0 (all jobs): 00:09:40.094 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:40.094 WRITE: bw=13.3MiB/s (14.0MB/s), 13.3MiB/s-13.3MiB/s (14.0MB/s-14.0MB/s), io=13.3MiB (14.0MB), run=1001-1001msec 00:09:40.094 00:09:40.094 Disk stats (read/write): 00:09:40.094 nvme0n1: ios=2792/3072, merge=0/0, 
ticks=531/372, in_queue=903, util=91.28% 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.094 rmmod nvme_tcp 00:09:40.094 rmmod nvme_fabrics 00:09:40.094 rmmod nvme_keyring 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66663 ']' 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66663 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66663 ']' 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66663 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66663 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.094 killing process with pid 66663 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66663' 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 66663 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66663 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:40.094 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:40.354 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:40.354 00:09:40.354 real 0m5.299s 00:09:40.354 user 0m15.587s 00:09:40.354 sys 0m2.315s 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.354 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.354 
************************************ 00:09:40.354 END TEST nvmf_nmic 00:09:40.354 ************************************ 00:09:40.611 14:15:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:40.611 14:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.611 14:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.611 14:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 ************************************ 00:09:40.611 START TEST nvmf_fio_target 00:09:40.611 ************************************ 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:40.612 * Looking for test storage... 00:09:40.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.612 --rc genhtml_branch_coverage=1 00:09:40.612 --rc genhtml_function_coverage=1 00:09:40.612 --rc genhtml_legend=1 00:09:40.612 --rc geninfo_all_blocks=1 00:09:40.612 --rc geninfo_unexecuted_blocks=1 00:09:40.612 00:09:40.612 ' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.612 --rc genhtml_branch_coverage=1 00:09:40.612 --rc genhtml_function_coverage=1 00:09:40.612 --rc genhtml_legend=1 00:09:40.612 --rc geninfo_all_blocks=1 00:09:40.612 --rc geninfo_unexecuted_blocks=1 00:09:40.612 00:09:40.612 ' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.612 --rc genhtml_branch_coverage=1 00:09:40.612 --rc genhtml_function_coverage=1 00:09:40.612 --rc genhtml_legend=1 00:09:40.612 --rc geninfo_all_blocks=1 00:09:40.612 --rc geninfo_unexecuted_blocks=1 00:09:40.612 00:09:40.612 ' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.612 --rc genhtml_branch_coverage=1 00:09:40.612 --rc genhtml_function_coverage=1 00:09:40.612 --rc genhtml_legend=1 00:09:40.612 --rc geninfo_all_blocks=1 00:09:40.612 --rc geninfo_unexecuted_blocks=1 00:09:40.612 00:09:40.612 ' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:40.612 
14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.612 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.612 14:15:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:40.612 Cannot find device "nvmf_init_br" 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:40.612 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:40.612 Cannot find device "nvmf_init_br2" 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:40.870 Cannot find device "nvmf_tgt_br" 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.870 Cannot find device "nvmf_tgt_br2" 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:40.870 Cannot find device "nvmf_init_br" 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:40.870 Cannot find device "nvmf_init_br2" 00:09:40.870 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:40.871 Cannot find device "nvmf_tgt_br" 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:40.871 Cannot find device "nvmf_tgt_br2" 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:40.871 Cannot find device "nvmf_br" 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:40.871 Cannot find device "nvmf_init_if" 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:40.871 Cannot find device "nvmf_init_if2" 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:40.871 
14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:40.871 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:41.130 00:09:41.130 --- 10.0.0.3 ping statistics --- 00:09:41.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.130 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.130 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.130 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:09:41.130 00:09:41.130 --- 10.0.0.4 ping statistics --- 00:09:41.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.130 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:41.130 00:09:41.130 --- 10.0.0.1 ping statistics --- 00:09:41.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.130 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:41.130 00:09:41.130 --- 10.0.0.2 ping statistics --- 00:09:41.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.130 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.130 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66974 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66974 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66974 ']' 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.131 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.131 [2024-12-10 14:15:05.880426] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:09:41.131 [2024-12-10 14:15:05.880512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.390 [2024-12-10 14:15:06.027069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.390 [2024-12-10 14:15:06.060069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.390 [2024-12-10 14:15:06.060150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.390 [2024-12-10 14:15:06.060159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.390 [2024-12-10 14:15:06.060167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.390 [2024-12-10 14:15:06.060174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.390 [2024-12-10 14:15:06.061035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.390 [2024-12-10 14:15:06.061453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.390 [2024-12-10 14:15:06.061613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.390 [2024-12-10 14:15:06.061619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.390 [2024-12-10 14:15:06.091892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.390 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.390 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:41.391 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.391 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.391 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.391 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.391 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:41.649 [2024-12-10 14:15:06.463826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.908 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.166 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:42.166 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.424 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:42.424 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.682 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:42.682 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.940 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:42.940 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:43.199 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.457 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:43.457 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.024 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:44.024 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.024 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:44.025 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:44.283 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:44.850 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:44.850 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.850 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:44.850 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:45.110 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:45.369 [2024-12-10 14:15:10.128067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:45.369 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:45.632 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:45.891 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:46.150 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:46.150 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:46.150 14:15:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.150 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:46.150 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:46.150 14:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:48.052 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.052 [global] 00:09:48.052 thread=1 00:09:48.052 invalidate=1 00:09:48.052 rw=write 00:09:48.052 time_based=1 00:09:48.052 runtime=1 00:09:48.052 ioengine=libaio 00:09:48.052 direct=1 00:09:48.052 bs=4096 00:09:48.052 iodepth=1 00:09:48.052 norandommap=0 00:09:48.052 numjobs=1 00:09:48.052 00:09:48.052 verify_dump=1 00:09:48.052 verify_backlog=512 00:09:48.052 verify_state_save=0 00:09:48.052 do_verify=1 00:09:48.052 verify=crc32c-intel 00:09:48.052 [job0] 00:09:48.052 filename=/dev/nvme0n1 00:09:48.052 [job1] 00:09:48.052 filename=/dev/nvme0n2 00:09:48.052 [job2] 00:09:48.052 filename=/dev/nvme0n3 00:09:48.052 [job3] 00:09:48.052 filename=/dev/nvme0n4 00:09:48.052 Could not set queue depth (nvme0n1) 00:09:48.052 Could not set queue depth (nvme0n2) 00:09:48.052 Could not set queue depth (nvme0n3) 00:09:48.052 Could not set queue depth (nvme0n4) 00:09:48.311 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.311 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.311 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.311 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.311 fio-3.35 00:09:48.311 Starting 4 threads 00:09:49.689 00:09:49.689 job0: (groupid=0, jobs=1): err= 0: pid=67151: Tue Dec 10 14:15:14 2024 00:09:49.689 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:49.689 slat (nsec): min=8177, max=61514, avg=17602.22, stdev=7245.30 00:09:49.689 clat (usec): min=158, max=2487, avg=375.12, stdev=101.05 00:09:49.689 lat (usec): min=173, max=2516, avg=392.72, stdev=102.40 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 223], 5.00th=[ 245], 10.00th=[ 265], 20.00th=[ 318], 00:09:49.689 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 388], 00:09:49.689 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 465], 95.00th=[ 490], 00:09:49.689 | 99.00th=[ 562], 99.50th=[ 611], 99.90th=[ 1844], 99.95th=[ 2474], 00:09:49.689 | 99.99th=[ 
2474] 00:09:49.689 write: IOPS=1709, BW=6837KiB/s (7001kB/s)(6844KiB/1001msec); 0 zone resets 00:09:49.689 slat (nsec): min=11096, max=89046, avg=22022.43, stdev=7397.07 00:09:49.689 clat (usec): min=102, max=480, avg=206.22, stdev=43.61 00:09:49.689 lat (usec): min=124, max=513, avg=228.24, stdev=44.77 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 129], 5.00th=[ 145], 10.00th=[ 161], 20.00th=[ 176], 00:09:49.689 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:09:49.689 | 70.00th=[ 221], 80.00th=[ 237], 90.00th=[ 260], 95.00th=[ 285], 00:09:49.689 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 457], 99.95th=[ 482], 00:09:49.689 | 99.99th=[ 482] 00:09:49.689 bw ( KiB/s): min= 8192, max= 8192, per=24.50%, avg=8192.00, stdev= 0.00, samples=1 00:09:49.689 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:49.689 lat (usec) : 250=48.78%, 500=49.21%, 750=1.91% 00:09:49.689 lat (msec) : 2=0.06%, 4=0.03% 00:09:49.689 cpu : usr=1.60%, sys=5.40%, ctx=3248, majf=0, minf=9 00:09:49.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 issued rwts: total=1536,1711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.689 job1: (groupid=0, jobs=1): err= 0: pid=67152: Tue Dec 10 14:15:14 2024 00:09:49.689 read: IOPS=1298, BW=5195KiB/s (5319kB/s)(5200KiB/1001msec) 00:09:49.689 slat (nsec): min=14084, max=94320, avg=32329.58, stdev=13944.28 00:09:49.689 clat (usec): min=136, max=1764, avg=452.02, stdev=157.42 00:09:49.689 lat (usec): min=164, max=1793, avg=484.35, stdev=166.55 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 178], 5.00th=[ 231], 10.00th=[ 249], 20.00th=[ 306], 00:09:49.689 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 469], 60.00th=[ 529], 00:09:49.689 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 668], 00:09:49.689 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 816], 99.95th=[ 1762], 00:09:49.689 | 99.99th=[ 1762] 00:09:49.689 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:49.689 slat (nsec): min=19876, max=87311, avg=28451.72, stdev=7656.88 00:09:49.689 clat (usec): min=93, max=480, avg=206.65, stdev=51.98 00:09:49.689 lat (usec): min=115, max=543, avg=235.10, stdev=53.10 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 116], 5.00th=[ 139], 10.00th=[ 151], 20.00th=[ 165], 00:09:49.689 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 196], 60.00th=[ 212], 00:09:49.689 | 70.00th=[ 231], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 293], 00:09:49.689 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 482], 00:09:49.689 | 99.99th=[ 482] 00:09:49.689 bw ( KiB/s): min= 8192, max= 8192, per=24.50%, avg=8192.00, stdev= 0.00, samples=1 00:09:49.689 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:49.689 lat (usec) : 100=0.11%, 250=48.31%, 500=30.47%, 750=21.02%, 1000=0.07% 00:09:49.689 lat (msec) : 2=0.04% 00:09:49.689 cpu : usr=2.50%, sys=6.30%, ctx=2836, majf=0, minf=12 00:09:49.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 issued rwts: total=1300,1536,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:49.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.689 job2: (groupid=0, jobs=1): err= 0: pid=67153: Tue Dec 10 14:15:14 2024 00:09:49.689 read: IOPS=1570, BW=6282KiB/s (6432kB/s)(6288KiB/1001msec) 00:09:49.689 slat (nsec): min=8503, max=70785, avg=17706.15, stdev=6098.09 00:09:49.689 clat (usec): min=148, max=3707, avg=349.95, stdev=175.45 00:09:49.689 lat (usec): min=163, max=3736, avg=367.66, stdev=175.55 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 196], 00:09:49.689 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 371], 00:09:49.689 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 482], 95.00th=[ 506], 00:09:49.689 | 99.00th=[ 603], 99.50th=[ 1057], 99.90th=[ 3130], 99.95th=[ 3720], 00:09:49.689 | 99.99th=[ 3720] 00:09:49.689 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:49.689 slat (usec): min=10, max=100, avg=19.65, stdev= 6.95 00:09:49.689 clat (usec): min=110, max=482, avg=183.18, stdev=43.67 00:09:49.689 lat (usec): min=133, max=497, avg=202.83, stdev=41.74 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 143], 00:09:49.689 | 30.00th=[ 155], 40.00th=[ 167], 50.00th=[ 178], 60.00th=[ 192], 00:09:49.689 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 265], 00:09:49.689 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 375], 99.95th=[ 375], 00:09:49.689 | 99.99th=[ 482] 00:09:49.689 bw ( KiB/s): min= 8192, max= 8192, per=24.50%, avg=8192.00, stdev= 0.00, samples=1 00:09:49.689 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:49.689 lat (usec) : 250=62.46%, 500=35.06%, 750=2.24%, 1000=0.03% 00:09:49.689 lat (msec) : 2=0.14%, 4=0.08% 00:09:49.689 cpu : usr=1.90%, sys=5.60%, ctx=3622, majf=0, minf=13 00:09:49.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 issued rwts: total=1572,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.689 job3: (groupid=0, jobs=1): err= 0: pid=67154: Tue Dec 10 14:15:14 2024 00:09:49.689 read: IOPS=2689, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:09:49.689 slat (nsec): min=11398, max=49585, avg=14877.58, stdev=4664.14 00:09:49.689 clat (usec): min=140, max=474, avg=176.78, stdev=20.37 00:09:49.689 lat (usec): min=153, max=489, avg=191.66, stdev=21.17 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:49.689 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:09:49.689 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 215], 00:09:49.689 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 262], 99.95th=[ 273], 00:09:49.689 | 99.99th=[ 474] 00:09:49.689 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:49.689 slat (nsec): min=14004, max=90729, avg=21958.07, stdev=6572.32 00:09:49.689 clat (usec): min=95, max=225, avg=132.55, stdev=18.47 00:09:49.689 lat (usec): min=114, max=316, avg=154.50, stdev=20.01 00:09:49.689 clat percentiles (usec): 00:09:49.689 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 118], 00:09:49.689 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:09:49.689 | 70.00th=[ 141], 
80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 167], 00:09:49.689 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 204], 00:09:49.689 | 99.99th=[ 227] 00:09:49.689 bw ( KiB/s): min=12288, max=12288, per=36.75%, avg=12288.00, stdev= 0.00, samples=1 00:09:49.689 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:49.689 lat (usec) : 100=0.49%, 250=99.39%, 500=0.12% 00:09:49.689 cpu : usr=2.60%, sys=8.10%, ctx=5764, majf=0, minf=5 00:09:49.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.689 issued rwts: total=2692,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.689 00:09:49.689 Run status group 0 (all jobs): 00:09:49.690 READ: bw=27.7MiB/s (29.1MB/s), 5195KiB/s-10.5MiB/s (5319kB/s-11.0MB/s), io=27.7MiB (29.1MB), run=1001-1001msec 00:09:49.690 WRITE: bw=32.7MiB/s (34.2MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:09:49.690 00:09:49.690 Disk stats (read/write): 00:09:49.690 nvme0n1: ios=1321/1536, merge=0/0, ticks=498/318, in_queue=816, util=86.87% 00:09:49.690 nvme0n2: ios=1076/1536, merge=0/0, ticks=476/343, in_queue=819, util=87.50% 00:09:49.690 nvme0n3: ios=1475/1536, merge=0/0, ticks=503/261, in_queue=764, util=88.53% 00:09:49.690 nvme0n4: ios=2338/2560, merge=0/0, ticks=421/373, in_queue=794, util=89.72% 00:09:49.690 14:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:49.690 [global] 00:09:49.690 thread=1 00:09:49.690 invalidate=1 00:09:49.690 rw=randwrite 00:09:49.690 time_based=1 00:09:49.690 runtime=1 00:09:49.690 ioengine=libaio 00:09:49.690 direct=1 00:09:49.690 bs=4096 00:09:49.690 iodepth=1 00:09:49.690 norandommap=0 00:09:49.690 numjobs=1 00:09:49.690 00:09:49.690 verify_dump=1 00:09:49.690 verify_backlog=512 00:09:49.690 verify_state_save=0 00:09:49.690 do_verify=1 00:09:49.690 verify=crc32c-intel 00:09:49.690 [job0] 00:09:49.690 filename=/dev/nvme0n1 00:09:49.690 [job1] 00:09:49.690 filename=/dev/nvme0n2 00:09:49.690 [job2] 00:09:49.690 filename=/dev/nvme0n3 00:09:49.690 [job3] 00:09:49.690 filename=/dev/nvme0n4 00:09:49.690 Could not set queue depth (nvme0n1) 00:09:49.690 Could not set queue depth (nvme0n2) 00:09:49.690 Could not set queue depth (nvme0n3) 00:09:49.690 Could not set queue depth (nvme0n4) 00:09:49.690 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.690 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.690 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.690 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.690 fio-3.35 00:09:49.690 Starting 4 threads 00:09:51.066 00:09:51.066 job0: (groupid=0, jobs=1): err= 0: pid=67217: Tue Dec 10 14:15:15 2024 00:09:51.066 read: IOPS=1825, BW=7301KiB/s (7476kB/s)(7308KiB/1001msec) 00:09:51.066 slat (nsec): min=11566, max=57718, avg=15705.41, stdev=4460.85 00:09:51.066 clat (usec): min=165, max=900, avg=283.53, stdev=61.63 00:09:51.066 lat (usec): min=178, max=922, avg=299.23, stdev=63.08 
00:09:51.066 clat percentiles (usec): 00:09:51.066 | 1.00th=[ 194], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:09:51.066 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:09:51.066 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 441], 00:09:51.066 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 898], 00:09:51.066 | 99.99th=[ 898] 00:09:51.066 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:51.066 slat (usec): min=17, max=115, avg=23.01, stdev= 6.59 00:09:51.066 clat (usec): min=95, max=976, avg=194.67, stdev=35.63 00:09:51.066 lat (usec): min=113, max=1092, avg=217.68, stdev=36.35 00:09:51.066 clat percentiles (usec): 00:09:51.066 | 1.00th=[ 109], 5.00th=[ 126], 10.00th=[ 161], 20.00th=[ 180], 00:09:51.066 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:09:51.066 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 239], 00:09:51.066 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 482], 99.95th=[ 498], 00:09:51.066 | 99.99th=[ 979] 00:09:51.066 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:51.066 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:51.066 lat (usec) : 100=0.15%, 250=58.22%, 500=39.97%, 750=1.60%, 1000=0.05% 00:09:51.066 cpu : usr=1.80%, sys=5.90%, ctx=3875, majf=0, minf=9 00:09:51.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.066 issued rwts: total=1827,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.066 job1: (groupid=0, jobs=1): err= 0: pid=67218: Tue Dec 10 14:15:15 2024 00:09:51.066 read: IOPS=3040, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:09:51.066 slat (nsec): min=11035, max=42713, avg=13560.05, stdev=3325.48 00:09:51.066 clat (usec): min=135, max=1550, avg=166.60, stdev=29.09 00:09:51.066 lat (usec): min=147, max=1562, avg=180.16, stdev=29.30 00:09:51.066 clat percentiles (usec): 00:09:51.066 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:09:51.066 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:51.066 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:09:51.066 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 233], 00:09:51.066 | 99.99th=[ 1549] 00:09:51.066 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:51.066 slat (usec): min=13, max=111, avg=20.03, stdev= 5.40 00:09:51.066 clat (usec): min=89, max=210, avg=123.64, stdev=14.36 00:09:51.066 lat (usec): min=107, max=321, avg=143.67, stdev=15.58 00:09:51.066 clat percentiles (usec): 00:09:51.066 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 112], 00:09:51.066 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:09:51.066 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 151], 00:09:51.066 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 206], 00:09:51.066 | 99.99th=[ 210] 00:09:51.066 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:51.066 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:51.066 lat (usec) : 100=1.36%, 250=98.63% 00:09:51.066 lat (msec) : 2=0.02% 00:09:51.066 cpu : usr=2.50%, sys=7.80%, ctx=6116, majf=0, minf=8 00:09:51.066 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.067 issued rwts: total=3044,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.067 job2: (groupid=0, jobs=1): err= 0: pid=67219: Tue Dec 10 14:15:15 2024 00:09:51.067 read: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:51.067 slat (nsec): min=11195, max=44864, avg=13984.53, stdev=3272.00 00:09:51.067 clat (usec): min=139, max=1557, avg=178.33, stdev=31.49 00:09:51.067 lat (usec): min=151, max=1570, avg=192.31, stdev=31.67 00:09:51.067 clat percentiles (usec): 00:09:51.067 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:51.067 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:51.067 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:09:51.067 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 247], 99.95th=[ 322], 00:09:51.067 | 99.99th=[ 1565] 00:09:51.067 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:51.067 slat (nsec): min=14387, max=79476, avg=21724.86, stdev=5433.29 00:09:51.067 clat (usec): min=98, max=1946, avg=134.57, stdev=35.82 00:09:51.067 lat (usec): min=116, max=1966, avg=156.30, stdev=36.28 00:09:51.067 clat percentiles (usec): 00:09:51.067 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 00:09:51.067 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:09:51.067 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:09:51.067 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 210], 99.95th=[ 221], 00:09:51.067 | 99.99th=[ 1942] 00:09:51.067 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:51.067 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:51.067 lat (usec) : 100=0.02%, 250=99.93%, 500=0.02% 00:09:51.067 lat (msec) : 2=0.03% 00:09:51.067 cpu : usr=2.60%, sys=8.00%, ctx=5719, majf=0, minf=19 00:09:51.067 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.067 issued rwts: total=2647,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.067 job3: (groupid=0, jobs=1): err= 0: pid=67220: Tue Dec 10 14:15:15 2024 00:09:51.067 read: IOPS=1786, BW=7145KiB/s (7316kB/s)(7152KiB/1001msec) 00:09:51.067 slat (nsec): min=11690, max=55779, avg=15292.36, stdev=4976.77 00:09:51.067 clat (usec): min=199, max=650, avg=280.72, stdev=40.28 00:09:51.067 lat (usec): min=214, max=666, avg=296.02, stdev=42.03 00:09:51.067 clat percentiles (usec): 00:09:51.067 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:09:51.067 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:51.067 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 359], 00:09:51.067 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 627], 99.95th=[ 652], 00:09:51.067 | 99.99th=[ 652] 00:09:51.067 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:51.067 slat (usec): min=15, max=107, avg=21.73, stdev= 7.26 00:09:51.067 clat (usec): min=116, max=2014, avg=204.90, stdev=63.70 00:09:51.067 lat (usec): 
min=134, max=2036, avg=226.63, stdev=65.51 00:09:51.067 clat percentiles (usec): 00:09:51.067 | 1.00th=[ 125], 5.00th=[ 141], 10.00th=[ 172], 20.00th=[ 184], 00:09:51.067 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:09:51.067 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 255], 00:09:51.067 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 865], 99.95th=[ 1287], 00:09:51.067 | 99.99th=[ 2008] 00:09:51.067 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:51.067 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:51.067 lat (usec) : 250=55.01%, 500=44.60%, 750=0.29%, 1000=0.05% 00:09:51.067 lat (msec) : 2=0.03%, 4=0.03% 00:09:51.067 cpu : usr=1.10%, sys=6.00%, ctx=3836, majf=0, minf=12 00:09:51.067 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.067 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.067 00:09:51.067 Run status group 0 (all jobs): 00:09:51.067 READ: bw=36.3MiB/s (38.1MB/s), 7145KiB/s-11.9MiB/s (7316kB/s-12.5MB/s), io=36.4MiB (38.1MB), run=1001-1001msec 00:09:51.067 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:09:51.067 00:09:51.067 Disk stats (read/write): 00:09:51.067 nvme0n1: ios=1585/1870, merge=0/0, ticks=454/377, in_queue=831, util=88.04% 00:09:51.067 nvme0n2: ios=2584/2725, merge=0/0, ticks=470/358, in_queue=828, util=89.13% 00:09:51.067 nvme0n3: ios=2344/2560, merge=0/0, ticks=426/371, in_queue=797, util=89.36% 00:09:51.067 nvme0n4: ios=1536/1760, merge=0/0, ticks=438/367, in_queue=805, util=89.82% 00:09:51.067 14:15:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:51.067 [global] 00:09:51.067 thread=1 00:09:51.067 invalidate=1 00:09:51.067 rw=write 00:09:51.067 time_based=1 00:09:51.067 runtime=1 00:09:51.067 ioengine=libaio 00:09:51.067 direct=1 00:09:51.067 bs=4096 00:09:51.067 iodepth=128 00:09:51.067 norandommap=0 00:09:51.067 numjobs=1 00:09:51.067 00:09:51.067 verify_dump=1 00:09:51.067 verify_backlog=512 00:09:51.067 verify_state_save=0 00:09:51.067 do_verify=1 00:09:51.067 verify=crc32c-intel 00:09:51.067 [job0] 00:09:51.067 filename=/dev/nvme0n1 00:09:51.067 [job1] 00:09:51.067 filename=/dev/nvme0n2 00:09:51.067 [job2] 00:09:51.067 filename=/dev/nvme0n3 00:09:51.067 [job3] 00:09:51.067 filename=/dev/nvme0n4 00:09:51.067 Could not set queue depth (nvme0n1) 00:09:51.067 Could not set queue depth (nvme0n2) 00:09:51.067 Could not set queue depth (nvme0n3) 00:09:51.067 Could not set queue depth (nvme0n4) 00:09:51.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.067 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.067 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.067 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.067 fio-3.35 00:09:51.067 Starting 4 threads 00:09:52.446 00:09:52.446 job0: (groupid=0, jobs=1): err= 0: pid=67274: Tue 
Dec 10 14:15:16 2024 00:09:52.446 read: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1002msec) 00:09:52.446 slat (usec): min=4, max=7022, avg=152.73, stdev=759.31 00:09:52.446 clat (usec): min=609, max=23064, avg=18887.28, stdev=2243.35 00:09:52.446 lat (usec): min=4748, max=23080, avg=19040.01, stdev=2134.44 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[ 5211], 5.00th=[16188], 10.00th=[16581], 20.00th=[17433], 00:09:52.446 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:09:52.446 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20317], 95.00th=[21365], 00:09:52.446 | 99.00th=[22152], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:09:52.446 | 99.99th=[22938] 00:09:52.446 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:09:52.446 slat (usec): min=14, max=4642, avg=135.66, stdev=624.18 00:09:52.446 clat (usec): min=12073, max=22788, avg=18593.41, stdev=1622.62 00:09:52.446 lat (usec): min=12475, max=22814, avg=18729.07, stdev=1486.69 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[14091], 5.00th=[15926], 10.00th=[16450], 20.00th=[17433], 00:09:52.446 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:09:52.446 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20841], 95.00th=[21365], 00:09:52.446 | 99.00th=[21890], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:09:52.446 | 99.99th=[22676] 00:09:52.446 bw ( KiB/s): min=13832, max=14600, per=26.69%, avg=14216.00, stdev=543.06, samples=2 00:09:52.446 iops : min= 3458, max= 3650, avg=3554.00, stdev=135.76, samples=2 00:09:52.446 lat (usec) : 750=0.01% 00:09:52.446 lat (msec) : 10=0.95%, 20=80.91%, 50=18.13% 00:09:52.446 cpu : usr=3.80%, sys=10.49%, ctx=213, majf=0, minf=12 00:09:52.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:52.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.446 issued rwts: total=3169,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.446 job1: (groupid=0, jobs=1): err= 0: pid=67275: Tue Dec 10 14:15:16 2024 00:09:52.446 read: IOPS=3130, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1002msec) 00:09:52.446 slat (usec): min=5, max=4793, avg=146.14, stdev=726.48 00:09:52.446 clat (usec): min=609, max=22414, avg=19357.81, stdev=2073.42 00:09:52.446 lat (usec): min=4745, max=22437, avg=19503.95, stdev=1938.66 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[ 5211], 5.00th=[17171], 10.00th=[18744], 20.00th=[19006], 00:09:52.446 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:09:52.446 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[21103], 00:09:52.446 | 99.00th=[22152], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:09:52.446 | 99.99th=[22414] 00:09:52.446 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:09:52.446 slat (usec): min=10, max=6771, avg=143.30, stdev=665.15 00:09:52.446 clat (usec): min=11579, max=21760, avg=18227.68, stdev=1421.27 00:09:52.446 lat (usec): min=11600, max=21807, avg=18370.98, stdev=1276.96 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[12387], 5.00th=[15664], 10.00th=[16319], 20.00th=[17433], 00:09:52.446 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:09:52.446 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19792], 95.00th=[20055], 00:09:52.446 | 99.00th=[21365], 
99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:09:52.446 | 99.99th=[21890] 00:09:52.446 bw ( KiB/s): min=13576, max=14600, per=26.45%, avg=14088.00, stdev=724.08, samples=2 00:09:52.446 iops : min= 3394, max= 3650, avg=3522.00, stdev=181.02, samples=2 00:09:52.446 lat (usec) : 750=0.01% 00:09:52.446 lat (msec) : 10=0.95%, 20=85.17%, 50=13.87% 00:09:52.446 cpu : usr=3.70%, sys=9.99%, ctx=213, majf=0, minf=11 00:09:52.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:52.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.446 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.446 job2: (groupid=0, jobs=1): err= 0: pid=67276: Tue Dec 10 14:15:16 2024 00:09:52.446 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:52.446 slat (usec): min=4, max=13257, avg=217.59, stdev=1247.46 00:09:52.446 clat (usec): min=12066, max=52918, avg=28266.56, stdev=10174.92 00:09:52.446 lat (usec): min=14432, max=52950, avg=28484.15, stdev=10176.86 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[14484], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:09:52.446 | 30.00th=[22676], 40.00th=[25297], 50.00th=[25822], 60.00th=[26870], 00:09:52.446 | 70.00th=[30278], 80.00th=[38536], 90.00th=[45876], 95.00th=[49021], 00:09:52.446 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:09:52.446 | 99.99th=[52691] 00:09:52.446 write: IOPS=2922, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1006msec); 0 zone resets 00:09:52.446 slat (usec): min=10, max=11463, avg=142.27, stdev=711.33 00:09:52.446 clat (usec): min=5592, max=37841, avg=18479.55, stdev=5461.66 00:09:52.446 lat (usec): min=5655, max=37922, avg=18621.82, stdev=5447.50 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[10552], 5.00th=[13042], 10.00th=[13304], 20.00th=[13698], 00:09:52.446 | 30.00th=[14222], 40.00th=[17171], 50.00th=[17957], 60.00th=[18482], 00:09:52.446 | 70.00th=[19006], 80.00th=[21890], 90.00th=[26346], 95.00th=[31327], 00:09:52.446 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:09:52.446 | 99.99th=[38011] 00:09:52.446 bw ( KiB/s): min=10464, max=12015, per=21.10%, avg=11239.50, stdev=1096.72, samples=2 00:09:52.446 iops : min= 2616, max= 3003, avg=2809.50, stdev=273.65, samples=2 00:09:52.446 lat (msec) : 10=0.51%, 20=49.80%, 50=47.71%, 100=1.98% 00:09:52.446 cpu : usr=3.48%, sys=7.86%, ctx=173, majf=0, minf=9 00:09:52.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:52.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.446 issued rwts: total=2560,2940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.446 job3: (groupid=0, jobs=1): err= 0: pid=67277: Tue Dec 10 14:15:16 2024 00:09:52.446 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:09:52.446 slat (usec): min=5, max=7194, avg=170.24, stdev=757.20 00:09:52.446 clat (usec): min=12767, max=42777, avg=21781.80, stdev=4112.32 00:09:52.446 lat (usec): min=14224, max=42798, avg=21952.04, stdev=4140.50 00:09:52.446 clat percentiles (usec): 00:09:52.446 | 1.00th=[15008], 5.00th=[16319], 10.00th=[17957], 20.00th=[19268], 00:09:52.446 | 30.00th=[19268], 
40.00th=[19530], 50.00th=[19792], 60.00th=[21103], 00:09:52.446 | 70.00th=[23725], 80.00th=[25822], 90.00th=[27657], 95.00th=[28443], 00:09:52.447 | 99.00th=[32900], 99.50th=[36439], 99.90th=[42730], 99.95th=[42730], 00:09:52.447 | 99.99th=[42730] 00:09:52.447 write: IOPS=3267, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1006msec); 0 zone resets 00:09:52.447 slat (usec): min=11, max=7606, avg=137.57, stdev=677.83 00:09:52.447 clat (usec): min=434, max=60575, avg=18262.80, stdev=10537.97 00:09:52.447 lat (usec): min=5630, max=60601, avg=18400.38, stdev=10613.01 00:09:52.447 clat percentiles (usec): 00:09:52.447 | 1.00th=[ 6259], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:09:52.447 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[15008], 00:09:52.447 | 70.00th=[16057], 80.00th=[18482], 90.00th=[36439], 95.00th=[44303], 00:09:52.447 | 99.00th=[51643], 99.50th=[54264], 99.90th=[60556], 99.95th=[60556], 00:09:52.447 | 99.99th=[60556] 00:09:52.447 bw ( KiB/s): min=12224, max=13048, per=23.72%, avg=12636.00, stdev=582.66, samples=2 00:09:52.447 iops : min= 3056, max= 3262, avg=3159.00, stdev=145.66, samples=2 00:09:52.447 lat (usec) : 500=0.02% 00:09:52.447 lat (msec) : 10=1.10%, 20=67.10%, 50=30.60%, 100=1.18% 00:09:52.447 cpu : usr=4.28%, sys=9.15%, ctx=245, majf=0, minf=7 00:09:52.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:52.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.447 issued rwts: total=3072,3287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.447 00:09:52.447 Run status group 0 (all jobs): 00:09:52.447 READ: bw=46.4MiB/s (48.6MB/s), 9.94MiB/s-12.4MiB/s (10.4MB/s-13.0MB/s), io=46.6MiB (48.9MB), run=1002-1006msec 00:09:52.447 WRITE: bw=52.0MiB/s (54.5MB/s), 11.4MiB/s-14.0MiB/s (12.0MB/s-14.7MB/s), io=52.3MiB (54.9MB), run=1002-1006msec 00:09:52.447 00:09:52.447 Disk stats (read/write): 00:09:52.447 nvme0n1: ios=2738/3072, merge=0/0, ticks=12266/12111, in_queue=24377, util=87.16% 00:09:52.447 nvme0n2: ios=2690/3072, merge=0/0, ticks=12292/12609, in_queue=24901, util=88.41% 00:09:52.447 nvme0n3: ios=2208/2560, merge=0/0, ticks=14802/10145, in_queue=24947, util=89.09% 00:09:52.447 nvme0n4: ios=2560/2718, merge=0/0, ticks=27411/23043, in_queue=50454, util=89.64% 00:09:52.447 14:15:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:52.447 [global] 00:09:52.447 thread=1 00:09:52.447 invalidate=1 00:09:52.447 rw=randwrite 00:09:52.447 time_based=1 00:09:52.447 runtime=1 00:09:52.447 ioengine=libaio 00:09:52.447 direct=1 00:09:52.447 bs=4096 00:09:52.447 iodepth=128 00:09:52.447 norandommap=0 00:09:52.447 numjobs=1 00:09:52.447 00:09:52.447 verify_dump=1 00:09:52.447 verify_backlog=512 00:09:52.447 verify_state_save=0 00:09:52.447 do_verify=1 00:09:52.447 verify=crc32c-intel 00:09:52.447 [job0] 00:09:52.447 filename=/dev/nvme0n1 00:09:52.447 [job1] 00:09:52.447 filename=/dev/nvme0n2 00:09:52.447 [job2] 00:09:52.447 filename=/dev/nvme0n3 00:09:52.447 [job3] 00:09:52.447 filename=/dev/nvme0n4 00:09:52.447 Could not set queue depth (nvme0n1) 00:09:52.447 Could not set queue depth (nvme0n2) 00:09:52.447 Could not set queue depth (nvme0n3) 00:09:52.447 Could not set queue depth (nvme0n4) 00:09:52.447 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.447 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.447 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.447 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.447 fio-3.35 00:09:52.447 Starting 4 threads 00:09:53.830 00:09:53.830 job0: (groupid=0, jobs=1): err= 0: pid=67332: Tue Dec 10 14:15:18 2024 00:09:53.830 read: IOPS=5236, BW=20.5MiB/s (21.4MB/s)(20.5MiB/1002msec) 00:09:53.830 slat (usec): min=7, max=4643, avg=93.51, stdev=401.46 00:09:53.830 clat (usec): min=1289, max=17936, avg=12157.05, stdev=1472.16 00:09:53.830 lat (usec): min=4652, max=17966, avg=12250.56, stdev=1474.18 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[ 6980], 5.00th=[10290], 10.00th=[10814], 20.00th=[11338], 00:09:53.830 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12256], 00:09:53.830 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:09:53.830 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16581], 99.95th=[16712], 00:09:53.830 | 99.99th=[17957] 00:09:53.830 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:53.830 slat (usec): min=10, max=5693, avg=82.93, stdev=466.33 00:09:53.830 clat (usec): min=4970, max=18333, avg=11185.72, stdev=1331.42 00:09:53.830 lat (usec): min=4994, max=18366, avg=11268.64, stdev=1401.41 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10159], 00:09:53.830 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11469], 00:09:53.830 | 70.00th=[11731], 80.00th=[12256], 90.00th=[12649], 95.00th=[13304], 00:09:53.830 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16909], 99.95th=[17433], 00:09:53.830 | 99.99th=[18220] 00:09:53.830 bw ( KiB/s): min=20480, max=20480, per=29.33%, avg=20480.00, stdev= 0.00, samples=1 00:09:53.830 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:53.830 lat (msec) : 2=0.01%, 10=8.36%, 20=91.64% 00:09:53.830 cpu : usr=5.29%, sys=14.09%, ctx=383, majf=0, minf=13 00:09:53.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.830 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.830 job1: (groupid=0, jobs=1): err= 0: pid=67333: Tue Dec 10 14:15:18 2024 00:09:53.830 read: IOPS=3479, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1007msec) 00:09:53.830 slat (usec): min=8, max=9606, avg=152.21, stdev=671.90 00:09:53.830 clat (usec): min=2937, max=42988, avg=19800.39, stdev=7790.50 00:09:53.830 lat (usec): min=8135, max=43052, avg=19952.60, stdev=7829.32 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[10028], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:09:53.830 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14222], 60.00th=[22938], 00:09:53.830 | 70.00th=[25560], 80.00th=[27132], 90.00th=[31065], 95.00th=[33424], 00:09:53.830 | 99.00th=[36963], 99.50th=[39584], 99.90th=[40633], 99.95th=[42730], 00:09:53.830 | 99.99th=[42730] 00:09:53.830 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 
00:09:53.830 slat (usec): min=10, max=4799, avg=122.50, stdev=458.41 00:09:53.830 clat (usec): min=9372, max=27930, avg=16048.85, stdev=4690.27 00:09:53.830 lat (usec): min=10874, max=27951, avg=16171.35, stdev=4713.78 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[10159], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:09:53.830 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[16450], 00:09:53.830 | 70.00th=[19792], 80.00th=[21365], 90.00th=[22676], 95.00th=[23987], 00:09:53.830 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27919], 99.95th=[27919], 00:09:53.830 | 99.99th=[27919] 00:09:53.830 bw ( KiB/s): min= 9784, max=18888, per=20.53%, avg=14336.00, stdev=6437.50, samples=2 00:09:53.830 iops : min= 2446, max= 4722, avg=3584.00, stdev=1609.38, samples=2 00:09:53.830 lat (msec) : 4=0.01%, 10=0.86%, 20=63.18%, 50=35.95% 00:09:53.830 cpu : usr=3.18%, sys=10.24%, ctx=694, majf=0, minf=7 00:09:53.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.830 issued rwts: total=3504,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.830 job2: (groupid=0, jobs=1): err= 0: pid=67334: Tue Dec 10 14:15:18 2024 00:09:53.830 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:09:53.830 slat (usec): min=5, max=8395, avg=163.53, stdev=674.51 00:09:53.830 clat (usec): min=9454, max=34603, avg=21147.15, stdev=6035.63 00:09:53.830 lat (usec): min=9472, max=34616, avg=21310.68, stdev=6069.09 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[12256], 5.00th=[13960], 10.00th=[14615], 20.00th=[15401], 00:09:53.830 | 30.00th=[15795], 40.00th=[16581], 50.00th=[19792], 60.00th=[24249], 00:09:53.830 | 70.00th=[26346], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:09:53.830 | 99.00th=[31589], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341], 00:09:53.830 | 99.99th=[34866] 00:09:53.830 write: IOPS=3500, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1004msec); 0 zone resets 00:09:53.830 slat (usec): min=10, max=7490, avg=133.91, stdev=635.65 00:09:53.830 clat (usec): min=3048, max=27273, avg=17606.30, stdev=4212.44 00:09:53.830 lat (usec): min=5220, max=27297, avg=17740.21, stdev=4254.56 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[ 8356], 5.00th=[12518], 10.00th=[13960], 20.00th=[14222], 00:09:53.830 | 30.00th=[14353], 40.00th=[14746], 50.00th=[16057], 60.00th=[18744], 00:09:53.830 | 70.00th=[20579], 80.00th=[21890], 90.00th=[23462], 95.00th=[24773], 00:09:53.830 | 99.00th=[26346], 99.50th=[26870], 99.90th=[27132], 99.95th=[27395], 00:09:53.830 | 99.99th=[27395] 00:09:53.830 bw ( KiB/s): min=10712, max=16384, per=19.41%, avg=13548.00, stdev=4010.71, samples=2 00:09:53.830 iops : min= 2678, max= 4096, avg=3387.00, stdev=1002.68, samples=2 00:09:53.830 lat (msec) : 4=0.02%, 10=1.34%, 20=58.69%, 50=39.96% 00:09:53.830 cpu : usr=2.79%, sys=9.87%, ctx=606, majf=0, minf=13 00:09:53.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.830 issued rwts: total=3072,3514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.830 job3: 
(groupid=0, jobs=1): err= 0: pid=67335: Tue Dec 10 14:15:18 2024 00:09:53.830 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:53.830 slat (usec): min=5, max=6134, avg=105.06, stdev=485.79 00:09:53.830 clat (usec): min=10152, max=21414, avg=13841.69, stdev=1416.09 00:09:53.830 lat (usec): min=10175, max=21463, avg=13946.75, stdev=1434.35 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[10683], 5.00th=[11994], 10.00th=[12387], 20.00th=[12911], 00:09:53.830 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:09:53.830 | 70.00th=[14222], 80.00th=[15139], 90.00th=[15926], 95.00th=[16581], 00:09:53.830 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[19530], 00:09:53.830 | 99.99th=[21365] 00:09:53.830 write: IOPS=4841, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1001msec); 0 zone resets 00:09:53.830 slat (usec): min=11, max=6390, avg=98.31, stdev=580.43 00:09:53.830 clat (usec): min=517, max=20475, avg=12942.60, stdev=1638.83 00:09:53.830 lat (usec): min=5581, max=20516, avg=13040.91, stdev=1722.73 00:09:53.830 clat percentiles (usec): 00:09:53.830 | 1.00th=[ 6783], 5.00th=[10814], 10.00th=[11600], 20.00th=[12125], 00:09:53.830 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[13042], 00:09:53.830 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15533], 00:09:53.830 | 99.00th=[17433], 99.50th=[17957], 99.90th=[20055], 99.95th=[20317], 00:09:53.830 | 99.99th=[20579] 00:09:53.830 bw ( KiB/s): min=18712, max=18712, per=26.80%, avg=18712.00, stdev= 0.00, samples=1 00:09:53.830 iops : min= 4678, max= 4678, avg=4678.00, stdev= 0.00, samples=1 00:09:53.830 lat (usec) : 750=0.01% 00:09:53.830 lat (msec) : 10=1.97%, 20=97.95%, 50=0.07% 00:09:53.830 cpu : usr=4.40%, sys=13.70%, ctx=278, majf=0, minf=19 00:09:53.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.830 issued rwts: total=4608,4846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.830 00:09:53.830 Run status group 0 (all jobs): 00:09:53.830 READ: bw=63.7MiB/s (66.8MB/s), 12.0MiB/s-20.5MiB/s (12.5MB/s-21.4MB/s), io=64.2MiB (67.3MB), run=1001-1007msec 00:09:53.830 WRITE: bw=68.2MiB/s (71.5MB/s), 13.7MiB/s-22.0MiB/s (14.3MB/s-23.0MB/s), io=68.7MiB (72.0MB), run=1001-1007msec 00:09:53.830 00:09:53.830 Disk stats (read/write): 00:09:53.830 nvme0n1: ios=4653/4640, merge=0/0, ticks=27121/22189, in_queue=49310, util=87.78% 00:09:53.830 nvme0n2: ios=3119/3303, merge=0/0, ticks=13440/11157, in_queue=24597, util=88.37% 00:09:53.830 nvme0n3: ios=2751/3072, merge=0/0, ticks=19129/16973, in_queue=36102, util=88.95% 00:09:53.830 nvme0n4: ios=3928/4096, merge=0/0, ticks=26499/22052, in_queue=48551, util=89.72% 00:09:53.830 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:53.831 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67348 00:09:53.831 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:53.831 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:53.831 [global] 00:09:53.831 thread=1 00:09:53.831 invalidate=1 00:09:53.831 rw=read 00:09:53.831 time_based=1 00:09:53.831 runtime=10 
00:09:53.831 ioengine=libaio 00:09:53.831 direct=1 00:09:53.831 bs=4096 00:09:53.831 iodepth=1 00:09:53.831 norandommap=1 00:09:53.831 numjobs=1 00:09:53.831 00:09:53.831 [job0] 00:09:53.831 filename=/dev/nvme0n1 00:09:53.831 [job1] 00:09:53.831 filename=/dev/nvme0n2 00:09:53.831 [job2] 00:09:53.831 filename=/dev/nvme0n3 00:09:53.831 [job3] 00:09:53.831 filename=/dev/nvme0n4 00:09:53.831 Could not set queue depth (nvme0n1) 00:09:53.831 Could not set queue depth (nvme0n2) 00:09:53.831 Could not set queue depth (nvme0n3) 00:09:53.831 Could not set queue depth (nvme0n4) 00:09:53.831 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.831 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.831 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.831 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.831 fio-3.35 00:09:53.831 Starting 4 threads 00:09:57.117 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:57.117 fio: pid=67397, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.117 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37056512, buflen=4096 00:09:57.117 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:57.117 fio: pid=67396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.117 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43577344, buflen=4096 00:09:57.117 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.117 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:57.377 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1269760, buflen=4096 00:09:57.377 fio: pid=67394, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.644 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.644 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:57.644 fio: pid=67395, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.644 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=21594112, buflen=4096 00:09:57.903 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.903 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:57.903 00:09:57.903 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67394: Tue Dec 10 14:15:22 2024 00:09:57.903 read: IOPS=4757, BW=18.6MiB/s (19.5MB/s)(65.2MiB/3509msec) 00:09:57.903 slat (usec): min=8, max=17408, avg=16.68, stdev=177.11 00:09:57.903 clat (usec): min=129, 
max=2762, avg=192.15, stdev=80.32 00:09:57.903 lat (usec): min=140, max=17579, avg=208.83, stdev=194.93 00:09:57.903 clat percentiles (usec): 00:09:57.903 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:09:57.903 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 163], 00:09:57.903 | 70.00th=[ 169], 80.00th=[ 219], 90.00th=[ 338], 95.00th=[ 359], 00:09:57.903 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 537], 99.95th=[ 766], 00:09:57.903 | 99.99th=[ 2024] 00:09:57.903 bw ( KiB/s): min=10888, max=23968, per=30.85%, avg=18920.33, stdev=6104.02, samples=6 00:09:57.903 iops : min= 2722, max= 5992, avg=4730.00, stdev=1525.95, samples=6 00:09:57.903 lat (usec) : 250=81.84%, 500=18.03%, 750=0.07%, 1000=0.02% 00:09:57.903 lat (msec) : 2=0.02%, 4=0.01% 00:09:57.903 cpu : usr=1.37%, sys=6.21%, ctx=16702, majf=0, minf=1 00:09:57.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.903 issued rwts: total=16695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.903 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67395: Tue Dec 10 14:15:22 2024 00:09:57.903 read: IOPS=5721, BW=22.3MiB/s (23.4MB/s)(84.6MiB/3785msec) 00:09:57.903 slat (usec): min=9, max=10860, avg=15.10, stdev=138.33 00:09:57.903 clat (usec): min=123, max=2916, avg=158.47, stdev=37.15 00:09:57.903 lat (usec): min=134, max=11178, avg=173.58, stdev=145.37 00:09:57.903 clat percentiles (usec): 00:09:57.903 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:09:57.903 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:09:57.903 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 198], 00:09:57.903 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 306], 99.95th=[ 644], 00:09:57.903 | 99.99th=[ 1762] 00:09:57.903 bw ( KiB/s): min=20138, max=24080, per=37.40%, avg=22941.71, stdev=1349.02, samples=7 00:09:57.903 iops : min= 5034, max= 6020, avg=5735.29, stdev=337.43, samples=7 00:09:57.903 lat (usec) : 250=99.73%, 500=0.20%, 750=0.02%, 1000=0.01% 00:09:57.903 lat (msec) : 2=0.02%, 4=0.01% 00:09:57.903 cpu : usr=1.64%, sys=6.63%, ctx=21667, majf=0, minf=2 00:09:57.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.903 issued rwts: total=21657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.903 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67396: Tue Dec 10 14:15:22 2024 00:09:57.903 read: IOPS=3271, BW=12.8MiB/s (13.4MB/s)(41.6MiB/3252msec) 00:09:57.903 slat (usec): min=9, max=10281, avg=17.04, stdev=125.15 00:09:57.903 clat (usec): min=147, max=3948, avg=287.10, stdev=91.09 00:09:57.903 lat (usec): min=160, max=10471, avg=304.14, stdev=154.18 00:09:57.903 clat percentiles (usec): 00:09:57.903 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 265], 00:09:57.903 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:09:57.903 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 371], 00:09:57.903 | 
99.00th=[ 416], 99.50th=[ 486], 99.90th=[ 1004], 99.95th=[ 2147], 00:09:57.903 | 99.99th=[ 3851] 00:09:57.903 bw ( KiB/s): min=10896, max=13662, per=20.54%, avg=12600.67, stdev=1159.53, samples=6 00:09:57.903 iops : min= 2724, max= 3415, avg=3150.00, stdev=289.89, samples=6 00:09:57.903 lat (usec) : 250=15.03%, 500=84.54%, 750=0.27%, 1000=0.05% 00:09:57.903 lat (msec) : 2=0.05%, 4=0.06% 00:09:57.904 cpu : usr=1.14%, sys=4.21%, ctx=10646, majf=0, minf=2 00:09:57.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.904 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.904 issued rwts: total=10640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.904 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67397: Tue Dec 10 14:15:22 2024 00:09:57.904 read: IOPS=3060, BW=12.0MiB/s (12.5MB/s)(35.3MiB/2956msec) 00:09:57.904 slat (nsec): min=12003, max=81690, avg=18204.12, stdev=6187.24 00:09:57.904 clat (usec): min=160, max=2550, avg=306.77, stdev=55.78 00:09:57.904 lat (usec): min=175, max=2580, avg=324.97, stdev=58.73 00:09:57.904 clat percentiles (usec): 00:09:57.904 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 273], 00:09:57.904 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:09:57.904 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 388], 00:09:57.904 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 676], 99.95th=[ 914], 00:09:57.904 | 99.99th=[ 2540] 00:09:57.904 bw ( KiB/s): min=10376, max=13384, per=19.79%, avg=12140.80, stdev=1302.37, samples=5 00:09:57.904 iops : min= 2594, max= 3346, avg=3035.20, stdev=325.59, samples=5 00:09:57.904 lat (usec) : 250=1.19%, 500=97.88%, 750=0.83%, 1000=0.06% 00:09:57.904 lat (msec) : 2=0.02%, 4=0.01% 00:09:57.904 cpu : usr=1.18%, sys=4.67%, ctx=9060, majf=0, minf=2 00:09:57.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.904 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.904 issued rwts: total=9048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.904 00:09:57.904 Run status group 0 (all jobs): 00:09:57.904 READ: bw=59.9MiB/s (62.8MB/s), 12.0MiB/s-22.3MiB/s (12.5MB/s-23.4MB/s), io=227MiB (238MB), run=2956-3785msec 00:09:57.904 00:09:57.904 Disk stats (read/write): 00:09:57.904 nvme0n1: ios=16068/0, merge=0/0, ticks=3089/0, in_queue=3089, util=95.19% 00:09:57.904 nvme0n2: ios=20630/0, merge=0/0, ticks=3324/0, in_queue=3324, util=95.58% 00:09:57.904 nvme0n3: ios=9929/0, merge=0/0, ticks=2909/0, in_queue=2909, util=96.24% 00:09:57.904 nvme0n4: ios=8761/0, merge=0/0, ticks=2732/0, in_queue=2732, util=96.76% 00:09:58.162 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.162 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:58.421 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.421 14:15:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:58.680 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.680 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:58.939 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.939 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:59.197 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:59.197 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67348 00:09:59.197 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:59.197 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.197 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.198 nvmf hotplug test: fio failed as expected 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:59.198 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
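The hotplug sequence traced above reduces to a simple pattern: fio reads from the four exported namespaces while the backing bdevs are deleted over RPC, so the "Operation not supported" errors are exactly the outcome the test wants. A condensed sketch of that pattern, reconstructed from the job options and rpc.py calls in this log (the job-file name, the [global] header, and the placement of rw=read are assumptions; this is not the literal fio.sh source):

cat > hotplug.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1
rw=read

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF

fio hotplug.fio &
fio_pid=$!

# Pull the backing devices out from under the running jobs.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"
done

# fio is expected to fail once its namespaces disappear.
fio_status=0
wait "$fio_pid" || fio_status=$?
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
[ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'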
00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.456 rmmod nvme_tcp 00:09:59.456 rmmod nvme_fabrics 00:09:59.456 rmmod nvme_keyring 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:59.456 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66974 ']' 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66974 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66974 ']' 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66974 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66974 00:09:59.457 killing process with pid 66974 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66974' 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66974 00:09:59.457 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66974 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:59.716 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:59.975 00:09:59.975 real 0m19.478s 00:09:59.975 user 1m12.784s 00:09:59.975 sys 0m10.414s 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.975 ************************************ 00:09:59.975 END TEST nvmf_fio_target 00:09:59.975 ************************************ 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.975 ************************************ 00:09:59.975 START TEST nvmf_bdevio 00:09:59.975 ************************************ 00:09:59.975 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:59.975 * Looking for test storage... 
00:10:00.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.234 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.235 --rc genhtml_branch_coverage=1 00:10:00.235 --rc genhtml_function_coverage=1 00:10:00.235 --rc genhtml_legend=1 00:10:00.235 --rc geninfo_all_blocks=1 00:10:00.235 --rc geninfo_unexecuted_blocks=1 00:10:00.235 00:10:00.235 ' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.235 --rc genhtml_branch_coverage=1 00:10:00.235 --rc genhtml_function_coverage=1 00:10:00.235 --rc genhtml_legend=1 00:10:00.235 --rc geninfo_all_blocks=1 00:10:00.235 --rc geninfo_unexecuted_blocks=1 00:10:00.235 00:10:00.235 ' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:00.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.235 --rc genhtml_branch_coverage=1 00:10:00.235 --rc genhtml_function_coverage=1 00:10:00.235 --rc genhtml_legend=1 00:10:00.235 --rc geninfo_all_blocks=1 00:10:00.235 --rc geninfo_unexecuted_blocks=1 00:10:00.235 00:10:00.235 ' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.235 --rc genhtml_branch_coverage=1 00:10:00.235 --rc genhtml_function_coverage=1 00:10:00.235 --rc genhtml_legend=1 00:10:00.235 --rc geninfo_all_blocks=1 00:10:00.235 --rc geninfo_unexecuted_blocks=1 00:10:00.235 00:10:00.235 ' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.235 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
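nvmftestinit on this NET_TYPE=virt run builds an all-virtual fabric before the target starts, as the nvmf_veth_init trace below shows: veth pairs for two initiator addresses (10.0.0.1, 10.0.0.2) stay in the root namespace, their target-side counterparts (10.0.0.3, 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and the peer ends are joined by the nvmf_br bridge with TCP port 4420 opened through iptables. Condensed to a single initiator/target pair, the setup amounts to the following (commands as logged; the second pair and its ping checks are handled identically, and the harness additionally tags each iptables rule with an SPDK_NVMF comment so it can be stripped later):

ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the *_br ends are the ones that join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # connectivity check before the target is started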
00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.235 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:00.236 Cannot find device "nvmf_init_br" 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:00.236 Cannot find device "nvmf_init_br2" 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:00.236 Cannot find device "nvmf_tgt_br" 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.236 Cannot find device "nvmf_tgt_br2" 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:00.236 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:00.236 Cannot find device "nvmf_init_br" 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:00.236 Cannot find device "nvmf_init_br2" 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:00.236 Cannot find device "nvmf_tgt_br" 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:00.236 Cannot find device "nvmf_tgt_br2" 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:00.236 Cannot find device "nvmf_br" 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:00.236 Cannot find device "nvmf_init_if" 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:00.236 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:00.493 Cannot find device "nvmf_init_if2" 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.493 
14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.493 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:00.494 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:00.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:10:00.752 00:10:00.752 --- 10.0.0.3 ping statistics --- 00:10:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.752 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:00.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:00.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:00.752 00:10:00.752 --- 10.0.0.4 ping statistics --- 00:10:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.752 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:00.752 00:10:00.752 --- 10.0.0.1 ping statistics --- 00:10:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.752 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:00.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:00.752 00:10:00.752 --- 10.0.0.2 ping statistics --- 00:10:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.752 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67715 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67715 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67715 ']' 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.752 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.752 [2024-12-10 14:15:25.467361] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:10:00.752 [2024-12-10 14:15:25.467455] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.011 [2024-12-10 14:15:25.615530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.011 [2024-12-10 14:15:25.646354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.011 [2024-12-10 14:15:25.646413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.011 [2024-12-10 14:15:25.646422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.011 [2024-12-10 14:15:25.646430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.011 [2024-12-10 14:15:25.646436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.011 [2024-12-10 14:15:25.647560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:01.011 [2024-12-10 14:15:25.647708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:01.011 [2024-12-10 14:15:25.648270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:01.011 [2024-12-10 14:15:25.648274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.011 [2024-12-10 14:15:25.677066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.011 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 [2024-12-10 14:15:25.774466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 Malloc0 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 [2024-12-10 14:15:25.837238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.012 { 00:10:01.012 "params": { 00:10:01.012 "name": "Nvme$subsystem", 00:10:01.012 "trtype": "$TEST_TRANSPORT", 00:10:01.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.012 "adrfam": "ipv4", 00:10:01.012 "trsvcid": "$NVMF_PORT", 00:10:01.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.012 "hdgst": ${hdgst:-false}, 00:10:01.012 "ddgst": ${ddgst:-false} 00:10:01.012 }, 00:10:01.012 "method": "bdev_nvme_attach_controller" 00:10:01.012 } 00:10:01.012 EOF 00:10:01.012 )") 00:10:01.012 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:01.271 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
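With the fabric up and the target process listening on the RPC socket, bdevio.sh configures the NVMe-oF target and then runs the bdevio initiator against it using a generated JSON config (printed just below by gen_nvmf_target_json). Stripped of the harness wrappers, the sequence is the following sketch (arguments exactly as logged; rpc_cmd in the harness is a wrapper around scripts/rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from bdevio.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# bdevio then attaches as an initiator through the bdev_nvme_attach_controller
# block emitted by gen_nvmf_target_json (the printf output shown below):
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)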
00:10:01.271 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:01.271 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.271 "params": { 00:10:01.271 "name": "Nvme1", 00:10:01.271 "trtype": "tcp", 00:10:01.271 "traddr": "10.0.0.3", 00:10:01.271 "adrfam": "ipv4", 00:10:01.271 "trsvcid": "4420", 00:10:01.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.271 "hdgst": false, 00:10:01.271 "ddgst": false 00:10:01.271 }, 00:10:01.271 "method": "bdev_nvme_attach_controller" 00:10:01.271 }' 00:10:01.271 [2024-12-10 14:15:25.892390] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:10:01.271 [2024-12-10 14:15:25.892460] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67744 ] 00:10:01.271 [2024-12-10 14:15:26.033096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:01.271 [2024-12-10 14:15:26.066169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.271 [2024-12-10 14:15:26.066312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.271 [2024-12-10 14:15:26.066316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.271 [2024-12-10 14:15:26.103735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.530 I/O targets: 00:10:01.530 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:01.530 00:10:01.530 00:10:01.530 CUnit - A unit testing framework for C - Version 2.1-3 00:10:01.530 http://cunit.sourceforge.net/ 00:10:01.530 00:10:01.530 00:10:01.530 Suite: bdevio tests on: Nvme1n1 00:10:01.530 Test: blockdev write read block ...passed 00:10:01.530 Test: blockdev write zeroes read block ...passed 00:10:01.530 Test: blockdev write zeroes read no split ...passed 00:10:01.530 Test: blockdev write zeroes read split ...passed 00:10:01.530 Test: blockdev write zeroes read split partial ...passed 00:10:01.530 Test: blockdev reset ...[2024-12-10 14:15:26.236389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:01.530 [2024-12-10 14:15:26.236492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd52b80 (9): Bad file descriptor 00:10:01.530 [2024-12-10 14:15:26.253562] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:01.530 passed 00:10:01.530 Test: blockdev write read 8 blocks ...passed 00:10:01.530 Test: blockdev write read size > 128k ...passed 00:10:01.530 Test: blockdev write read invalid size ...passed 00:10:01.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.530 Test: blockdev write read max offset ...passed 00:10:01.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.530 Test: blockdev writev readv 8 blocks ...passed 00:10:01.530 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.530 Test: blockdev writev readv block ...passed 00:10:01.530 Test: blockdev writev readv size > 128k ...passed 00:10:01.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.530 Test: blockdev comparev and writev ...[2024-12-10 14:15:26.264093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.264149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.264176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.264189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.264611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.264645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.264668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.264680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.265058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.265090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.265111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.265124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.265521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.265553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:01.530 [2024-12-10 14:15:26.265575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.530 [2024-12-10 14:15:26.265587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:01.530 passed 00:10:01.530 Test: blockdev nvme passthru rw ...passed 00:10:01.530 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:15:26.270106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.531 [2024-12-10 14:15:26.270139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:01.531 [2024-12-10 14:15:26.270273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.531 [2024-12-10 14:15:26.270323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:01.531 [2024-12-10 14:15:26.270466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.531 [2024-12-10 14:15:26.270497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:01.531 [2024-12-10 14:15:26.270641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.531 [2024-12-10 14:15:26.270685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:01.531 passed 00:10:01.531 Test: blockdev nvme admin passthru ...passed 00:10:01.531 Test: blockdev copy ...passed 00:10:01.531 00:10:01.531 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.531 suites 1 1 n/a 0 0 00:10:01.531 tests 23 23 23 0 0 00:10:01.531 asserts 152 152 152 0 n/a 00:10:01.531 00:10:01.531 Elapsed time = 0.167 seconds 00:10:01.873 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.873 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.874 rmmod nvme_tcp 00:10:01.874 rmmod nvme_fabrics 00:10:01.874 rmmod nvme_keyring 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
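
The bdevio run above consumed a JSON config equivalent to the bdev_nvme_attach_controller parameters printed at the start of this section. As a minimal sketch (not taken from this run, and assuming the stock scripts/rpc.py flag names for this RPC), the same attach could be issued by hand against a running SPDK application:

scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 \
    -t tcp \
    -a 10.0.0.3 \
    -f ipv4 \
    -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1

This creates bdev Nvme1n1 backed by the TCP connection to the target listening on 10.0.0.3:4420, which is what the 23 bdevio tests above exercised before nvmftestfini tore the connection down.
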
00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67715 ']' 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67715 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67715 ']' 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67715 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67715 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:01.874 killing process with pid 67715 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67715' 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67715 00:10:01.874 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67715 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:02.144 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:02.145 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:02.145 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:02.145 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:02.145 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.145 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.145 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.404 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:02.404 00:10:02.404 real 0m2.259s 00:10:02.404 user 0m5.385s 00:10:02.404 sys 0m0.782s 00:10:02.404 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.404 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.404 ************************************ 00:10:02.404 END TEST nvmf_bdevio 00:10:02.404 ************************************ 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:02.404 00:10:02.404 real 2m28.913s 00:10:02.404 user 6m27.379s 00:10:02.404 sys 0m53.590s 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.404 ************************************ 00:10:02.404 END TEST nvmf_target_core 00:10:02.404 ************************************ 00:10:02.404 14:15:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:02.404 14:15:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.404 14:15:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.404 14:15:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:02.404 ************************************ 00:10:02.404 START TEST nvmf_target_extra 00:10:02.404 ************************************ 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:02.404 * Looking for test storage... 
00:10:02.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.404 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.665 --rc genhtml_branch_coverage=1 00:10:02.665 --rc genhtml_function_coverage=1 00:10:02.665 --rc genhtml_legend=1 00:10:02.665 --rc geninfo_all_blocks=1 00:10:02.665 --rc geninfo_unexecuted_blocks=1 00:10:02.665 00:10:02.665 ' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.665 --rc genhtml_branch_coverage=1 00:10:02.665 --rc genhtml_function_coverage=1 00:10:02.665 --rc genhtml_legend=1 00:10:02.665 --rc geninfo_all_blocks=1 00:10:02.665 --rc geninfo_unexecuted_blocks=1 00:10:02.665 00:10:02.665 ' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.665 --rc genhtml_branch_coverage=1 00:10:02.665 --rc genhtml_function_coverage=1 00:10:02.665 --rc genhtml_legend=1 00:10:02.665 --rc geninfo_all_blocks=1 00:10:02.665 --rc geninfo_unexecuted_blocks=1 00:10:02.665 00:10:02.665 ' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.665 --rc genhtml_branch_coverage=1 00:10:02.665 --rc genhtml_function_coverage=1 00:10:02.665 --rc genhtml_legend=1 00:10:02.665 --rc geninfo_all_blocks=1 00:10:02.665 --rc geninfo_unexecuted_blocks=1 00:10:02.665 00:10:02.665 ' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.665 14:15:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:02.665 ************************************ 00:10:02.665 START TEST nvmf_auth_target 00:10:02.665 ************************************ 00:10:02.665 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:02.665 * Looking for test storage... 
00:10:02.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.666 --rc genhtml_branch_coverage=1 00:10:02.666 --rc genhtml_function_coverage=1 00:10:02.666 --rc genhtml_legend=1 00:10:02.666 --rc geninfo_all_blocks=1 00:10:02.666 --rc geninfo_unexecuted_blocks=1 00:10:02.666 00:10:02.666 ' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.666 --rc genhtml_branch_coverage=1 00:10:02.666 --rc genhtml_function_coverage=1 00:10:02.666 --rc genhtml_legend=1 00:10:02.666 --rc geninfo_all_blocks=1 00:10:02.666 --rc geninfo_unexecuted_blocks=1 00:10:02.666 00:10:02.666 ' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.666 --rc genhtml_branch_coverage=1 00:10:02.666 --rc genhtml_function_coverage=1 00:10:02.666 --rc genhtml_legend=1 00:10:02.666 --rc geninfo_all_blocks=1 00:10:02.666 --rc geninfo_unexecuted_blocks=1 00:10:02.666 00:10:02.666 ' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.666 --rc genhtml_branch_coverage=1 00:10:02.666 --rc genhtml_function_coverage=1 00:10:02.666 --rc genhtml_legend=1 00:10:02.666 --rc geninfo_all_blocks=1 00:10:02.666 --rc geninfo_unexecuted_blocks=1 00:10:02.666 00:10:02.666 ' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.666 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:02.666 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:02.667 
14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:02.667 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:02.926 Cannot find device "nvmf_init_br" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:02.926 Cannot find device "nvmf_init_br2" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:02.926 Cannot find device "nvmf_tgt_br" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:02.926 Cannot find device "nvmf_tgt_br2" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:02.926 Cannot find device "nvmf_init_br" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:02.926 Cannot find device "nvmf_init_br2" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:02.926 Cannot find device "nvmf_tgt_br" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:02.926 Cannot find device "nvmf_tgt_br2" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:02.926 Cannot find device "nvmf_br" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:02.926 Cannot find device "nvmf_init_if" 00:10:02.926 14:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:02.926 Cannot find device "nvmf_init_if2" 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:02.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:02.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:02.926 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.186 14:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:03.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.186 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:10:03.186 00:10:03.186 --- 10.0.0.3 ping statistics --- 00:10:03.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.186 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:03.186 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:03.186 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:10:03.186 00:10:03.186 --- 10.0.0.4 ping statistics --- 00:10:03.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.186 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:10:03.186 00:10:03.186 --- 10.0.0.1 ping statistics --- 00:10:03.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.186 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:03.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:03.186 00:10:03.186 --- 10.0.0.2 ping statistics --- 00:10:03.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.186 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=68028 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 68028 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68028 ']' 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
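
With the target up inside the nvmf_tgt_ns_spdk namespace, the gen_dhchap_key calls traced below create DH-HMAC-CHAP secrets ("DHHC-1:...") for the auth tests and store them in mode-0600 files under /tmp. A minimal sketch of what the null/48 variant appears to do, assuming the standard NVMe in-band-auth secret representation (base64 of the raw secret followed by its CRC-32 in little-endian order; hash id 00 for a non-hashed secret) rather than the exact helper code:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex digits, as in the trace
keyfile=$(mktemp -t spdk.key-null.XXX)
python3 - "$key_hex" <<'EOF' > "$keyfile"
import base64, binascii, struct, sys
secret = bytes.fromhex(sys.argv[1])                         # raw secret bytes
blob = secret + struct.pack("<I", binascii.crc32(secret))   # append CRC-32 (assumed little-endian)
print("DHHC-1:00:{}:".format(base64.b64encode(blob).decode()))
EOF
chmod 0600 "$keyfile"                       # keys must not be world-readable

The sha256/sha384/sha512 variants in the trace below differ only in the secret length drawn from /dev/urandom and in the hash id written after "DHHC-1:" (1, 2 or 3 in the helper's digest map).
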
00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.186 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.445 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.445 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:03.445 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.445 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.445 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=68047 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87d59239c96c281dd156bd9397ea20a70e7bb7c181c5ebee 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.12R 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 87d59239c96c281dd156bd9397ea20a70e7bb7c181c5ebee 0 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87d59239c96c281dd156bd9397ea20a70e7bb7c181c5ebee 0 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87d59239c96c281dd156bd9397ea20a70e7bb7c181c5ebee 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.704 14:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.12R 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.12R 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.12R 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:03.704 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8ee36d95bda84448960a7625d6cb341b449cd53480db0721309b3da622d392df 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wUS 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8ee36d95bda84448960a7625d6cb341b449cd53480db0721309b3da622d392df 3 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8ee36d95bda84448960a7625d6cb341b449cd53480db0721309b3da622d392df 3 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8ee36d95bda84448960a7625d6cb341b449cd53480db0721309b3da622d392df 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wUS 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wUS 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wUS 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:03.705 14:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=be9396f605c542076138f03b84adda83 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.odY 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key be9396f605c542076138f03b84adda83 1 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 be9396f605c542076138f03b84adda83 1 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=be9396f605c542076138f03b84adda83 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.odY 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.odY 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.odY 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ecd7193b536d56381c3cc276bf4fae8a1de9fd6eb02447f8 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WSz 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ecd7193b536d56381c3cc276bf4fae8a1de9fd6eb02447f8 2 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ecd7193b536d56381c3cc276bf4fae8a1de9fd6eb02447f8 2 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ecd7193b536d56381c3cc276bf4fae8a1de9fd6eb02447f8 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:03.705 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WSz 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WSz 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.WSz 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a3b4ef3d2b3d405ca37a400370fc8059071fecbd4f62aa0b 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.X20 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a3b4ef3d2b3d405ca37a400370fc8059071fecbd4f62aa0b 2 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a3b4ef3d2b3d405ca37a400370fc8059071fecbd4f62aa0b 2 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a3b4ef3d2b3d405ca37a400370fc8059071fecbd4f62aa0b 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.X20 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.X20 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.X20 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.964 14:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6d28903792589b65ef860656b4f524a4 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HLa 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6d28903792589b65ef860656b4f524a4 1 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6d28903792589b65ef860656b4f524a4 1 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6d28903792589b65ef860656b4f524a4 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HLa 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HLa 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.HLa 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d653f0128eef18bab7d74ba6a3c8823f440ffe963d84240ff789c932b6d0daf8 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4at 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
d653f0128eef18bab7d74ba6a3c8823f440ffe963d84240ff789c932b6d0daf8 3 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d653f0128eef18bab7d74ba6a3c8823f440ffe963d84240ff789c932b6d0daf8 3 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:03.964 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d653f0128eef18bab7d74ba6a3c8823f440ffe963d84240ff789c932b6d0daf8 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4at 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4at 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.4at 00:10:03.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 68028 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68028 ']' 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.965 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 68047 /var/tmp/host.sock 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68047 ']' 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
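Note on the key material above: every gen_dhchap_key / format_dhchap_key call follows the same recipe - pull N random bytes as hex from /dev/urandom, prepend the DHHC-1 prefix plus a digest id from the digests map shown in the trace (null=0, sha256=1, sha384=2, sha512=3), base64-encode the secret, and store the result 0600 in a /tmp/spdk.key-* file. A minimal standalone sketch of that recipe follows; the CRC-32 suffix in the Python helper is my reading of SPDK's format_dhchap_key and the "python -" step in the trace, so treat it as an approximation rather than the exact upstream code.

# Sketch: build a sha512-class DH-HMAC-CHAP secret the way the test does.
key=$(xxd -p -c0 -l 32 /dev/urandom)        # 32 random bytes -> 64 hex chars
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key" > "$file" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
# DHHC-1:<digest id>:<base64(secret || crc32(secret), little-endian)>:
crc = zlib.crc32(secret).to_bytes(4, 'little')
print(f"DHHC-1:03:{base64.b64encode(secret + crc).decode()}:")
PYEOF
chmod 0600 "$file"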
00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.531 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.12R 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.12R 00:10:04.789 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.12R 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.wUS ]] 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wUS 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wUS 00:10:05.047 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wUS 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.odY 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.odY 00:10:05.304 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.odY 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.WSz ]] 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WSz 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WSz 00:10:05.562 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WSz 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.X20 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.X20 00:10:05.820 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.X20 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.HLa ]] 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HLa 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HLa 00:10:06.078 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HLa 00:10:06.336 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:06.336 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4at 00:10:06.336 14:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.336 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.336 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.336 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4at 00:10:06.336 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4at 00:10:06.594 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:06.594 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:06.594 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:06.594 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.594 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.594 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:06.852 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.110 00:10:07.110 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.110 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.110 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.368 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.368 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.368 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.368 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.627 { 00:10:07.627 "cntlid": 1, 00:10:07.627 "qid": 0, 00:10:07.627 "state": "enabled", 00:10:07.627 "thread": "nvmf_tgt_poll_group_000", 00:10:07.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:07.627 "listen_address": { 00:10:07.627 "trtype": "TCP", 00:10:07.627 "adrfam": "IPv4", 00:10:07.627 "traddr": "10.0.0.3", 00:10:07.627 "trsvcid": "4420" 00:10:07.627 }, 00:10:07.627 "peer_address": { 00:10:07.627 "trtype": "TCP", 00:10:07.627 "adrfam": "IPv4", 00:10:07.627 "traddr": "10.0.0.1", 00:10:07.627 "trsvcid": "50958" 00:10:07.627 }, 00:10:07.627 "auth": { 00:10:07.627 "state": "completed", 00:10:07.627 "digest": "sha256", 00:10:07.627 "dhgroup": "null" 00:10:07.627 } 00:10:07.627 } 00:10:07.627 ]' 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.627 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.885 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:07.885 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:13.150 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.150 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:13.150 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.150 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.150 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.150 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.151 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:13.151 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.151 14:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.151 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.151 { 00:10:13.151 "cntlid": 3, 00:10:13.151 "qid": 0, 00:10:13.151 "state": "enabled", 00:10:13.151 "thread": "nvmf_tgt_poll_group_000", 00:10:13.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:13.151 "listen_address": { 00:10:13.151 "trtype": "TCP", 00:10:13.151 "adrfam": "IPv4", 00:10:13.151 "traddr": "10.0.0.3", 00:10:13.151 "trsvcid": "4420" 00:10:13.151 }, 00:10:13.151 "peer_address": { 00:10:13.151 "trtype": "TCP", 00:10:13.151 "adrfam": "IPv4", 00:10:13.151 "traddr": "10.0.0.1", 00:10:13.151 "trsvcid": "48866" 00:10:13.151 }, 00:10:13.151 "auth": { 00:10:13.151 "state": "completed", 00:10:13.151 "digest": "sha256", 00:10:13.151 "dhgroup": "null" 00:10:13.151 } 00:10:13.151 } 00:10:13.151 ]' 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:13.151 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.408 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.408 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.408 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.666 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret 
DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:13.666 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.601 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.168 00:10:15.168 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.168 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.168 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.426 { 00:10:15.426 "cntlid": 5, 00:10:15.426 "qid": 0, 00:10:15.426 "state": "enabled", 00:10:15.426 "thread": "nvmf_tgt_poll_group_000", 00:10:15.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:15.426 "listen_address": { 00:10:15.426 "trtype": "TCP", 00:10:15.426 "adrfam": "IPv4", 00:10:15.426 "traddr": "10.0.0.3", 00:10:15.426 "trsvcid": "4420" 00:10:15.426 }, 00:10:15.426 "peer_address": { 00:10:15.426 "trtype": "TCP", 00:10:15.426 "adrfam": "IPv4", 00:10:15.426 "traddr": "10.0.0.1", 00:10:15.426 "trsvcid": "48888" 00:10:15.426 }, 00:10:15.426 "auth": { 00:10:15.426 "state": "completed", 00:10:15.426 "digest": "sha256", 00:10:15.426 "dhgroup": "null" 00:10:15.426 } 00:10:15.426 } 00:10:15.426 ]' 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.426 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.993 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:15.993 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:16.560 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.819 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.077 00:10:17.077 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.077 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.077 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.336 { 00:10:17.336 "cntlid": 7, 00:10:17.336 "qid": 0, 00:10:17.336 "state": "enabled", 00:10:17.336 "thread": "nvmf_tgt_poll_group_000", 00:10:17.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:17.336 "listen_address": { 00:10:17.336 "trtype": "TCP", 00:10:17.336 "adrfam": "IPv4", 00:10:17.336 "traddr": "10.0.0.3", 00:10:17.336 "trsvcid": "4420" 00:10:17.336 }, 00:10:17.336 "peer_address": { 00:10:17.336 "trtype": "TCP", 00:10:17.336 "adrfam": "IPv4", 00:10:17.336 "traddr": "10.0.0.1", 00:10:17.336 "trsvcid": "48912" 00:10:17.336 }, 00:10:17.336 "auth": { 00:10:17.336 "state": "completed", 00:10:17.336 "digest": "sha256", 00:10:17.336 "dhgroup": "null" 00:10:17.336 } 00:10:17.336 } 00:10:17.336 ]' 00:10:17.336 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.594 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.865 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:17.865 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.447 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.014 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.273 00:10:19.273 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.273 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.273 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.531 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.531 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.531 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.531 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.531 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.531 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.531 { 00:10:19.531 "cntlid": 9, 00:10:19.531 "qid": 0, 00:10:19.531 "state": "enabled", 00:10:19.531 "thread": "nvmf_tgt_poll_group_000", 00:10:19.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:19.531 "listen_address": { 00:10:19.531 "trtype": "TCP", 00:10:19.531 "adrfam": "IPv4", 00:10:19.531 "traddr": "10.0.0.3", 00:10:19.531 "trsvcid": "4420" 00:10:19.531 }, 00:10:19.531 "peer_address": { 00:10:19.531 "trtype": "TCP", 00:10:19.531 "adrfam": "IPv4", 00:10:19.531 "traddr": "10.0.0.1", 00:10:19.531 "trsvcid": "48960" 00:10:19.531 }, 00:10:19.531 "auth": { 00:10:19.531 "state": "completed", 00:10:19.531 "digest": "sha256", 00:10:19.531 "dhgroup": "ffdhe2048" 00:10:19.532 } 00:10:19.532 } 00:10:19.532 ]' 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.532 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.791 
14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:19.791 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.726 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.293 00:10:21.293 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.293 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.293 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.293 { 00:10:21.293 "cntlid": 11, 00:10:21.293 "qid": 0, 00:10:21.293 "state": "enabled", 00:10:21.293 "thread": "nvmf_tgt_poll_group_000", 00:10:21.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:21.293 "listen_address": { 00:10:21.293 "trtype": "TCP", 00:10:21.293 "adrfam": "IPv4", 00:10:21.293 "traddr": "10.0.0.3", 00:10:21.293 "trsvcid": "4420" 00:10:21.293 }, 00:10:21.293 "peer_address": { 00:10:21.293 "trtype": "TCP", 00:10:21.293 "adrfam": "IPv4", 00:10:21.293 "traddr": "10.0.0.1", 00:10:21.293 "trsvcid": "52716" 00:10:21.293 }, 00:10:21.293 "auth": { 00:10:21.293 "state": "completed", 00:10:21.293 "digest": "sha256", 00:10:21.293 "dhgroup": "ffdhe2048" 00:10:21.293 } 00:10:21.293 } 00:10:21.293 ]' 00:10:21.293 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.552 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.552 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.552 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:21.552 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.552 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.552 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.552 
14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.810 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:21.810 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.377 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.636 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.895 00:10:22.895 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.895 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.895 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.154 { 00:10:23.154 "cntlid": 13, 00:10:23.154 "qid": 0, 00:10:23.154 "state": "enabled", 00:10:23.154 "thread": "nvmf_tgt_poll_group_000", 00:10:23.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:23.154 "listen_address": { 00:10:23.154 "trtype": "TCP", 00:10:23.154 "adrfam": "IPv4", 00:10:23.154 "traddr": "10.0.0.3", 00:10:23.154 "trsvcid": "4420" 00:10:23.154 }, 00:10:23.154 "peer_address": { 00:10:23.154 "trtype": "TCP", 00:10:23.154 "adrfam": "IPv4", 00:10:23.154 "traddr": "10.0.0.1", 00:10:23.154 "trsvcid": "52744" 00:10:23.154 }, 00:10:23.154 "auth": { 00:10:23.154 "state": "completed", 00:10:23.154 "digest": "sha256", 00:10:23.154 "dhgroup": "ffdhe2048" 00:10:23.154 } 00:10:23.154 } 00:10:23.154 ]' 00:10:23.154 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.413 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.413 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.413 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:23.413 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.413 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.413 14:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.413 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.671 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:23.671 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:24.237 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:24.495 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.754 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:25.013 00:10:25.013 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.013 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.013 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.272 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.272 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.272 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.272 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.272 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.272 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.272 { 00:10:25.272 "cntlid": 15, 00:10:25.272 "qid": 0, 00:10:25.272 "state": "enabled", 00:10:25.272 "thread": "nvmf_tgt_poll_group_000", 00:10:25.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:25.272 "listen_address": { 00:10:25.272 "trtype": "TCP", 00:10:25.272 "adrfam": "IPv4", 00:10:25.272 "traddr": "10.0.0.3", 00:10:25.272 "trsvcid": "4420" 00:10:25.272 }, 00:10:25.272 "peer_address": { 00:10:25.272 "trtype": "TCP", 00:10:25.272 "adrfam": "IPv4", 00:10:25.272 "traddr": "10.0.0.1", 00:10:25.272 "trsvcid": "52760" 00:10:25.272 }, 00:10:25.272 "auth": { 00:10:25.272 "state": "completed", 00:10:25.272 "digest": "sha256", 00:10:25.272 "dhgroup": "ffdhe2048" 00:10:25.272 } 00:10:25.272 } 00:10:25.272 ]' 00:10:25.272 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.272 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.272 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.531 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:25.531 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.531 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.531 
14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.531 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.789 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:25.789 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.356 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.614 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.180 00:10:27.180 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.180 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.180 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.439 { 00:10:27.439 "cntlid": 17, 00:10:27.439 "qid": 0, 00:10:27.439 "state": "enabled", 00:10:27.439 "thread": "nvmf_tgt_poll_group_000", 00:10:27.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:27.439 "listen_address": { 00:10:27.439 "trtype": "TCP", 00:10:27.439 "adrfam": "IPv4", 00:10:27.439 "traddr": "10.0.0.3", 00:10:27.439 "trsvcid": "4420" 00:10:27.439 }, 00:10:27.439 "peer_address": { 00:10:27.439 "trtype": "TCP", 00:10:27.439 "adrfam": "IPv4", 00:10:27.439 "traddr": "10.0.0.1", 00:10:27.439 "trsvcid": "52786" 00:10:27.439 }, 00:10:27.439 "auth": { 00:10:27.439 "state": "completed", 00:10:27.439 "digest": "sha256", 00:10:27.439 "dhgroup": "ffdhe3072" 00:10:27.439 } 00:10:27.439 } 00:10:27.439 ]' 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.439 14:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.439 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.025 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:28.025 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.591 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.850 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.108 00:10:29.108 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.108 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.108 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.366 { 00:10:29.366 "cntlid": 19, 00:10:29.366 "qid": 0, 00:10:29.366 "state": "enabled", 00:10:29.366 "thread": "nvmf_tgt_poll_group_000", 00:10:29.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:29.366 "listen_address": { 00:10:29.366 "trtype": "TCP", 00:10:29.366 "adrfam": "IPv4", 00:10:29.366 "traddr": "10.0.0.3", 00:10:29.366 "trsvcid": "4420" 00:10:29.366 }, 00:10:29.366 "peer_address": { 00:10:29.366 "trtype": "TCP", 00:10:29.366 "adrfam": "IPv4", 00:10:29.366 "traddr": "10.0.0.1", 00:10:29.366 "trsvcid": "52822" 00:10:29.366 }, 00:10:29.366 "auth": { 00:10:29.366 "state": "completed", 00:10:29.366 "digest": "sha256", 00:10:29.366 "dhgroup": "ffdhe3072" 00:10:29.366 } 00:10:29.366 } 00:10:29.366 ]' 00:10:29.366 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.625 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.883 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:29.883 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:30.817 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.075 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.334 00:10:31.334 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.334 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.334 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.592 { 00:10:31.592 "cntlid": 21, 00:10:31.592 "qid": 0, 00:10:31.592 "state": "enabled", 00:10:31.592 "thread": "nvmf_tgt_poll_group_000", 00:10:31.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:31.592 "listen_address": { 00:10:31.592 "trtype": "TCP", 00:10:31.592 "adrfam": "IPv4", 00:10:31.592 "traddr": "10.0.0.3", 00:10:31.592 "trsvcid": "4420" 00:10:31.592 }, 00:10:31.592 "peer_address": { 00:10:31.592 "trtype": "TCP", 00:10:31.592 "adrfam": "IPv4", 00:10:31.592 "traddr": "10.0.0.1", 00:10:31.592 "trsvcid": "45748" 00:10:31.592 }, 00:10:31.592 "auth": { 00:10:31.592 "state": "completed", 00:10:31.592 "digest": "sha256", 00:10:31.592 "dhgroup": "ffdhe3072" 00:10:31.592 } 00:10:31.592 } 00:10:31.592 ]' 00:10:31.592 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.851 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.851 14:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.851 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:31.851 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.851 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.851 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.851 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.109 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:32.109 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:33.044 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.044 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:33.044 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.045 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.045 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.045 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.045 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:33.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:33.563 00:10:33.563 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:33.563 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.563 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.821 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.821 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.821 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.821 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.821 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.822 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.822 { 00:10:33.822 "cntlid": 23, 00:10:33.822 "qid": 0, 00:10:33.822 "state": "enabled", 00:10:33.822 "thread": "nvmf_tgt_poll_group_000", 00:10:33.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:33.822 "listen_address": { 00:10:33.822 "trtype": "TCP", 00:10:33.822 "adrfam": "IPv4", 00:10:33.822 "traddr": "10.0.0.3", 00:10:33.822 "trsvcid": "4420" 00:10:33.822 }, 00:10:33.822 "peer_address": { 00:10:33.822 "trtype": "TCP", 00:10:33.822 "adrfam": "IPv4", 00:10:33.822 "traddr": "10.0.0.1", 00:10:33.822 "trsvcid": "45764" 00:10:33.822 }, 00:10:33.822 "auth": { 00:10:33.822 "state": "completed", 00:10:33.822 "digest": "sha256", 00:10:33.822 "dhgroup": "ffdhe3072" 00:10:33.822 } 00:10:33.822 } 00:10:33.822 ]' 00:10:33.822 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.079 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.338 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:34.338 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.275 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.275 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.841 00:10:35.841 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.841 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.841 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.099 { 00:10:36.099 "cntlid": 25, 00:10:36.099 "qid": 0, 00:10:36.099 "state": "enabled", 00:10:36.099 "thread": "nvmf_tgt_poll_group_000", 00:10:36.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:36.099 "listen_address": { 00:10:36.099 "trtype": "TCP", 00:10:36.099 "adrfam": "IPv4", 00:10:36.099 "traddr": "10.0.0.3", 00:10:36.099 "trsvcid": "4420" 00:10:36.099 }, 00:10:36.099 "peer_address": { 00:10:36.099 "trtype": "TCP", 00:10:36.099 "adrfam": "IPv4", 00:10:36.099 "traddr": "10.0.0.1", 00:10:36.099 "trsvcid": "45786" 00:10:36.099 }, 00:10:36.099 "auth": { 00:10:36.099 "state": "completed", 00:10:36.099 "digest": "sha256", 00:10:36.099 "dhgroup": "ffdhe4096" 00:10:36.099 } 00:10:36.099 } 00:10:36.099 ]' 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.099 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:36.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.617 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:36.617 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:37.183 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.183 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.752 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.011 00:10:38.011 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.011 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.011 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.270 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.270 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.270 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.270 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.270 { 00:10:38.270 "cntlid": 27, 00:10:38.270 "qid": 0, 00:10:38.270 "state": "enabled", 00:10:38.270 "thread": "nvmf_tgt_poll_group_000", 00:10:38.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:38.270 "listen_address": { 00:10:38.270 "trtype": "TCP", 00:10:38.270 "adrfam": "IPv4", 00:10:38.270 "traddr": "10.0.0.3", 00:10:38.270 "trsvcid": "4420" 00:10:38.270 }, 00:10:38.270 "peer_address": { 00:10:38.270 "trtype": "TCP", 00:10:38.270 "adrfam": "IPv4", 00:10:38.270 "traddr": "10.0.0.1", 00:10:38.270 "trsvcid": "45818" 00:10:38.270 }, 00:10:38.270 "auth": { 00:10:38.270 "state": "completed", 
00:10:38.270 "digest": "sha256", 00:10:38.270 "dhgroup": "ffdhe4096" 00:10:38.270 } 00:10:38.270 } 00:10:38.270 ]' 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:38.270 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.529 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.529 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.529 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.789 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:38.789 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.357 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.615 14:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.615 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.616 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.616 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.616 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.616 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.616 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.182 00:10:40.182 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.182 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.182 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.442 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.442 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.442 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.442 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.442 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.442 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.442 { 00:10:40.442 "cntlid": 29, 00:10:40.443 "qid": 0, 00:10:40.443 "state": "enabled", 00:10:40.443 "thread": "nvmf_tgt_poll_group_000", 00:10:40.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:40.443 "listen_address": { 00:10:40.443 "trtype": "TCP", 00:10:40.443 "adrfam": "IPv4", 00:10:40.443 "traddr": "10.0.0.3", 00:10:40.443 "trsvcid": "4420" 00:10:40.443 }, 00:10:40.443 "peer_address": { 00:10:40.443 "trtype": "TCP", 00:10:40.443 "adrfam": 
"IPv4", 00:10:40.443 "traddr": "10.0.0.1", 00:10:40.443 "trsvcid": "45850" 00:10:40.443 }, 00:10:40.443 "auth": { 00:10:40.443 "state": "completed", 00:10:40.443 "digest": "sha256", 00:10:40.443 "dhgroup": "ffdhe4096" 00:10:40.443 } 00:10:40.443 } 00:10:40.443 ]' 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.443 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.703 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:40.703 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:41.638 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.638 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:41.638 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.638 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.638 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:41.639 14:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:41.639 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.205 00:10:42.205 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.205 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.205 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.464 { 00:10:42.464 "cntlid": 31, 00:10:42.464 "qid": 0, 00:10:42.464 "state": "enabled", 00:10:42.464 "thread": "nvmf_tgt_poll_group_000", 00:10:42.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:42.464 "listen_address": { 00:10:42.464 "trtype": "TCP", 00:10:42.464 "adrfam": "IPv4", 00:10:42.464 "traddr": "10.0.0.3", 00:10:42.464 "trsvcid": "4420" 00:10:42.464 }, 00:10:42.464 "peer_address": { 00:10:42.464 "trtype": "TCP", 
00:10:42.464 "adrfam": "IPv4", 00:10:42.464 "traddr": "10.0.0.1", 00:10:42.464 "trsvcid": "35364" 00:10:42.464 }, 00:10:42.464 "auth": { 00:10:42.464 "state": "completed", 00:10:42.464 "digest": "sha256", 00:10:42.464 "dhgroup": "ffdhe4096" 00:10:42.464 } 00:10:42.464 } 00:10:42.464 ]' 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.464 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.031 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:43.031 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:43.598 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:43.857 
14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.857 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.424 00:10:44.424 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.424 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.424 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.682 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.682 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.682 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.682 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.682 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.682 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.682 { 00:10:44.682 "cntlid": 33, 00:10:44.682 "qid": 0, 00:10:44.682 "state": "enabled", 00:10:44.682 "thread": "nvmf_tgt_poll_group_000", 00:10:44.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:44.682 "listen_address": { 00:10:44.683 "trtype": "TCP", 00:10:44.683 "adrfam": "IPv4", 00:10:44.683 "traddr": 
"10.0.0.3", 00:10:44.683 "trsvcid": "4420" 00:10:44.683 }, 00:10:44.683 "peer_address": { 00:10:44.683 "trtype": "TCP", 00:10:44.683 "adrfam": "IPv4", 00:10:44.683 "traddr": "10.0.0.1", 00:10:44.683 "trsvcid": "35388" 00:10:44.683 }, 00:10:44.683 "auth": { 00:10:44.683 "state": "completed", 00:10:44.683 "digest": "sha256", 00:10:44.683 "dhgroup": "ffdhe6144" 00:10:44.683 } 00:10:44.683 } 00:10:44.683 ]' 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.683 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.940 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:44.941 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:45.563 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.563 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:45.563 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.563 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.822 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.822 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.822 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.822 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.081 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.648 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.648 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.907 { 00:10:46.907 "cntlid": 35, 00:10:46.907 "qid": 0, 00:10:46.907 "state": "enabled", 00:10:46.907 "thread": "nvmf_tgt_poll_group_000", 
00:10:46.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:46.907 "listen_address": { 00:10:46.907 "trtype": "TCP", 00:10:46.907 "adrfam": "IPv4", 00:10:46.907 "traddr": "10.0.0.3", 00:10:46.907 "trsvcid": "4420" 00:10:46.907 }, 00:10:46.907 "peer_address": { 00:10:46.907 "trtype": "TCP", 00:10:46.907 "adrfam": "IPv4", 00:10:46.907 "traddr": "10.0.0.1", 00:10:46.907 "trsvcid": "35426" 00:10:46.907 }, 00:10:46.907 "auth": { 00:10:46.907 "state": "completed", 00:10:46.907 "digest": "sha256", 00:10:46.907 "dhgroup": "ffdhe6144" 00:10:46.907 } 00:10:46.907 } 00:10:46.907 ]' 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.907 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.165 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:47.165 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:48.100 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.101 14:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.101 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.667 00:10:48.667 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.667 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.667 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.925 { 
00:10:48.925 "cntlid": 37, 00:10:48.925 "qid": 0, 00:10:48.925 "state": "enabled", 00:10:48.925 "thread": "nvmf_tgt_poll_group_000", 00:10:48.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:48.925 "listen_address": { 00:10:48.925 "trtype": "TCP", 00:10:48.925 "adrfam": "IPv4", 00:10:48.925 "traddr": "10.0.0.3", 00:10:48.925 "trsvcid": "4420" 00:10:48.925 }, 00:10:48.925 "peer_address": { 00:10:48.925 "trtype": "TCP", 00:10:48.925 "adrfam": "IPv4", 00:10:48.925 "traddr": "10.0.0.1", 00:10:48.925 "trsvcid": "35454" 00:10:48.925 }, 00:10:48.925 "auth": { 00:10:48.925 "state": "completed", 00:10:48.925 "digest": "sha256", 00:10:48.925 "dhgroup": "ffdhe6144" 00:10:48.925 } 00:10:48.925 } 00:10:48.925 ]' 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.925 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.183 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:49.183 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.183 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.184 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.184 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.442 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:49.442 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.009 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.575 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.576 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.834 00:10:50.834 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.834 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.834 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:51.093 { 00:10:51.093 "cntlid": 39, 00:10:51.093 "qid": 0, 00:10:51.093 "state": "enabled", 00:10:51.093 "thread": "nvmf_tgt_poll_group_000", 00:10:51.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:51.093 "listen_address": { 00:10:51.093 "trtype": "TCP", 00:10:51.093 "adrfam": "IPv4", 00:10:51.093 "traddr": "10.0.0.3", 00:10:51.093 "trsvcid": "4420" 00:10:51.093 }, 00:10:51.093 "peer_address": { 00:10:51.093 "trtype": "TCP", 00:10:51.093 "adrfam": "IPv4", 00:10:51.093 "traddr": "10.0.0.1", 00:10:51.093 "trsvcid": "34708" 00:10:51.093 }, 00:10:51.093 "auth": { 00:10:51.093 "state": "completed", 00:10:51.093 "digest": "sha256", 00:10:51.093 "dhgroup": "ffdhe6144" 00:10:51.093 } 00:10:51.093 } 00:10:51.093 ]' 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.093 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.352 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:51.352 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.352 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.352 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.352 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.611 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:51.611 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.179 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.438 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.373 00:10:53.373 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.373 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.373 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.373 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.373 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.373 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.373 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.373 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:53.373 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.373 { 00:10:53.373 "cntlid": 41, 00:10:53.373 "qid": 0, 00:10:53.373 "state": "enabled", 00:10:53.373 "thread": "nvmf_tgt_poll_group_000", 00:10:53.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:53.373 "listen_address": { 00:10:53.373 "trtype": "TCP", 00:10:53.373 "adrfam": "IPv4", 00:10:53.373 "traddr": "10.0.0.3", 00:10:53.373 "trsvcid": "4420" 00:10:53.373 }, 00:10:53.373 "peer_address": { 00:10:53.373 "trtype": "TCP", 00:10:53.373 "adrfam": "IPv4", 00:10:53.373 "traddr": "10.0.0.1", 00:10:53.373 "trsvcid": "34734" 00:10:53.373 }, 00:10:53.373 "auth": { 00:10:53.373 "state": "completed", 00:10:53.373 "digest": "sha256", 00:10:53.373 "dhgroup": "ffdhe8192" 00:10:53.374 } 00:10:53.374 } 00:10:53.374 ]' 00:10:53.374 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.374 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.374 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.632 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.632 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.632 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.632 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.632 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.891 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:53.891 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
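[Editor's note] The trace above repeats one DH-HMAC-CHAP round per configured key. Below is a minimal sketch of that round, pieced together from the rpc.py and nvme-cli invocations visible in the log; it is not the target/auth.sh source, "key1"/"ckey1" stand for key names registered earlier in the run, and the DHHC-1 secrets are placeholders rather than the values used in this run.

  #!/usr/bin/env bash
  # Sketch only: one authentication round against an already-running SPDK target and host app.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target side: allow this host NQN on the subsystem with a host/ctrlr keypair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side (separate RPC socket): attaching the bdev controller performs the
  # DH-HMAC-CHAP handshake with the target.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

  # Target side: confirm the negotiated parameters on the new queue pair.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Host side: tear the controller down again.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Kernel-initiator leg: nvme-cli connects with matching DHHC-1 secrets
  # (placeholders below), then disconnects.
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -l 0 \
      -q "$hostnqn" --hostid "${hostnqn#*uuid:}" \
      --dhchap-secret 'DHHC-1:01:<host-secret-placeholder>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<ctrlr-secret-placeholder>:'
  nvme disconnect -n "$subnqn"

  # Target side: drop the host entry so the next key can be exercised.
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
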
00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.459 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.718 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.285 00:10:55.544 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.544 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.544 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.544 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.544 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.544 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.544 14:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.803 { 00:10:55.803 "cntlid": 43, 00:10:55.803 "qid": 0, 00:10:55.803 "state": "enabled", 00:10:55.803 "thread": "nvmf_tgt_poll_group_000", 00:10:55.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:55.803 "listen_address": { 00:10:55.803 "trtype": "TCP", 00:10:55.803 "adrfam": "IPv4", 00:10:55.803 "traddr": "10.0.0.3", 00:10:55.803 "trsvcid": "4420" 00:10:55.803 }, 00:10:55.803 "peer_address": { 00:10:55.803 "trtype": "TCP", 00:10:55.803 "adrfam": "IPv4", 00:10:55.803 "traddr": "10.0.0.1", 00:10:55.803 "trsvcid": "34752" 00:10:55.803 }, 00:10:55.803 "auth": { 00:10:55.803 "state": "completed", 00:10:55.803 "digest": "sha256", 00:10:55.803 "dhgroup": "ffdhe8192" 00:10:55.803 } 00:10:55.803 } 00:10:55.803 ]' 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.803 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.804 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.804 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.804 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.063 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:56.063 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
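[Editor's note] The target/auth.sh@119/@120 loop markers in the trace imply the nesting sketched below: every DH group seen in this portion of the log (ffdhe4096, ffdhe6144, ffdhe8192) is exercised with every key index 0-3, and the host is restricted to exactly one digest/dhgroup combination before each round. This is a hedged reconstruction from the trace only; the full script may iterate other digests, and connect_authenticate stands for the per-round steps sketched earlier, not code reproduced from the original.

  # Sketch of the loop structure implied by target/auth.sh@119-@123 in the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do          # @119
      for keyid in 0 1 2 3; do                               # @120
          # Limit the host to a single digest/dhgroup before authenticating.
          "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
          connect_authenticate sha256 "$dhgroup" "$keyid"            # @123
      done
  done
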
00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.000 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.259 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.827 00:10:57.827 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.827 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.827 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.087 14:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.087 { 00:10:58.087 "cntlid": 45, 00:10:58.087 "qid": 0, 00:10:58.087 "state": "enabled", 00:10:58.087 "thread": "nvmf_tgt_poll_group_000", 00:10:58.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:10:58.087 "listen_address": { 00:10:58.087 "trtype": "TCP", 00:10:58.087 "adrfam": "IPv4", 00:10:58.087 "traddr": "10.0.0.3", 00:10:58.087 "trsvcid": "4420" 00:10:58.087 }, 00:10:58.087 "peer_address": { 00:10:58.087 "trtype": "TCP", 00:10:58.087 "adrfam": "IPv4", 00:10:58.087 "traddr": "10.0.0.1", 00:10:58.087 "trsvcid": "34780" 00:10:58.087 }, 00:10:58.087 "auth": { 00:10:58.087 "state": "completed", 00:10:58.087 "digest": "sha256", 00:10:58.087 "dhgroup": "ffdhe8192" 00:10:58.087 } 00:10:58.087 } 00:10:58.087 ]' 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.087 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.347 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.347 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.347 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.606 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:58.606 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
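The qpairs JSON above (cntlid 45, the key2 pass) is what the per-cycle assertions run against: the single qpair created by the attach must report the negotiated digest and DH group and an auth state of "completed". Roughly, using the same jq filters that appear in the trace (the variable plumbing is paraphrased):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]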
00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.174 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.433 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.999 00:10:59.999 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.999 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.999 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.578 
14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.578 { 00:11:00.578 "cntlid": 47, 00:11:00.578 "qid": 0, 00:11:00.578 "state": "enabled", 00:11:00.578 "thread": "nvmf_tgt_poll_group_000", 00:11:00.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:00.578 "listen_address": { 00:11:00.578 "trtype": "TCP", 00:11:00.578 "adrfam": "IPv4", 00:11:00.578 "traddr": "10.0.0.3", 00:11:00.578 "trsvcid": "4420" 00:11:00.578 }, 00:11:00.578 "peer_address": { 00:11:00.578 "trtype": "TCP", 00:11:00.578 "adrfam": "IPv4", 00:11:00.578 "traddr": "10.0.0.1", 00:11:00.578 "trsvcid": "34804" 00:11:00.578 }, 00:11:00.578 "auth": { 00:11:00.578 "state": "completed", 00:11:00.578 "digest": "sha256", 00:11:00.578 "dhgroup": "ffdhe8192" 00:11:00.578 } 00:11:00.578 } 00:11:00.578 ]' 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.578 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.838 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:00.838 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
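That closes the last key for the sha256/ffdhe8192 combination. The markers that follow (target/auth.sh@118-120) show the outer loops advancing: the next pass uses sha384 with the "null" DH group, i.e. DH-HMAC-CHAP without the optional FFDHE exchange. Paraphrased from the loop and RPC markers visible in this trace (the complete digest/dhgroup/key lists are not shown in this excerpt):

  for digest in "${digests[@]}"; do        # auth.sh@118; sha256 and sha384 appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119; ffdhe8192, null, ffdhe2048 appear here
      for keyid in "${!keys[@]}"; do       # auth.sh@120; key0..key3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # auth.sh@121
        connect_authenticate "$digest" "$dhgroup" "$keyid"                                      # auth.sh@123
      done
    done
  done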
00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.775 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.776 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.343 00:11:02.343 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.343 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.343 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.602 { 00:11:02.602 "cntlid": 49, 00:11:02.602 "qid": 0, 00:11:02.602 "state": "enabled", 00:11:02.602 "thread": "nvmf_tgt_poll_group_000", 00:11:02.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:02.602 "listen_address": { 00:11:02.602 "trtype": "TCP", 00:11:02.602 "adrfam": "IPv4", 00:11:02.602 "traddr": "10.0.0.3", 00:11:02.602 "trsvcid": "4420" 00:11:02.602 }, 00:11:02.602 "peer_address": { 00:11:02.602 "trtype": "TCP", 00:11:02.602 "adrfam": "IPv4", 00:11:02.602 "traddr": "10.0.0.1", 00:11:02.602 "trsvcid": "46098" 00:11:02.602 }, 00:11:02.602 "auth": { 00:11:02.602 "state": "completed", 00:11:02.602 "digest": "sha384", 00:11:02.602 "dhgroup": "null" 00:11:02.602 } 00:11:02.602 } 00:11:02.602 ]' 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.602 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.861 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:02.861 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.798 14:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.798 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.057 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.317 00:11:04.317 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.317 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.317 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.576 { 00:11:04.576 "cntlid": 51, 00:11:04.576 "qid": 0, 00:11:04.576 "state": "enabled", 00:11:04.576 "thread": "nvmf_tgt_poll_group_000", 00:11:04.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:04.576 "listen_address": { 00:11:04.576 "trtype": "TCP", 00:11:04.576 "adrfam": "IPv4", 00:11:04.576 "traddr": "10.0.0.3", 00:11:04.576 "trsvcid": "4420" 00:11:04.576 }, 00:11:04.576 "peer_address": { 00:11:04.576 "trtype": "TCP", 00:11:04.576 "adrfam": "IPv4", 00:11:04.576 "traddr": "10.0.0.1", 00:11:04.576 "trsvcid": "46138" 00:11:04.576 }, 00:11:04.576 "auth": { 00:11:04.576 "state": "completed", 00:11:04.576 "digest": "sha384", 00:11:04.576 "dhgroup": "null" 00:11:04.576 } 00:11:04.576 } 00:11:04.576 ]' 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.576 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.835 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:04.835 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.835 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.835 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.835 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.094 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:05.094 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.661 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.661 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.920 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.488 00:11:06.488 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.488 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:06.488 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.747 { 00:11:06.747 "cntlid": 53, 00:11:06.747 "qid": 0, 00:11:06.747 "state": "enabled", 00:11:06.747 "thread": "nvmf_tgt_poll_group_000", 00:11:06.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:06.747 "listen_address": { 00:11:06.747 "trtype": "TCP", 00:11:06.747 "adrfam": "IPv4", 00:11:06.747 "traddr": "10.0.0.3", 00:11:06.747 "trsvcid": "4420" 00:11:06.747 }, 00:11:06.747 "peer_address": { 00:11:06.747 "trtype": "TCP", 00:11:06.747 "adrfam": "IPv4", 00:11:06.747 "traddr": "10.0.0.1", 00:11:06.747 "trsvcid": "46180" 00:11:06.747 }, 00:11:06.747 "auth": { 00:11:06.747 "state": "completed", 00:11:06.747 "digest": "sha384", 00:11:06.747 "dhgroup": "null" 00:11:06.747 } 00:11:06.747 } 00:11:06.747 ]' 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.747 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:07.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.945 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.512 00:11:08.512 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.512 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.512 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.771 { 00:11:08.771 "cntlid": 55, 00:11:08.771 "qid": 0, 00:11:08.771 "state": "enabled", 00:11:08.771 "thread": "nvmf_tgt_poll_group_000", 00:11:08.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:08.771 "listen_address": { 00:11:08.771 "trtype": "TCP", 00:11:08.771 "adrfam": "IPv4", 00:11:08.771 "traddr": "10.0.0.3", 00:11:08.771 "trsvcid": "4420" 00:11:08.771 }, 00:11:08.771 "peer_address": { 00:11:08.771 "trtype": "TCP", 00:11:08.771 "adrfam": "IPv4", 00:11:08.771 "traddr": "10.0.0.1", 00:11:08.771 "trsvcid": "46212" 00:11:08.771 }, 00:11:08.771 "auth": { 00:11:08.771 "state": "completed", 00:11:08.771 "digest": "sha384", 00:11:08.771 "dhgroup": "null" 00:11:08.771 } 00:11:08.771 } 00:11:08.771 ]' 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.771 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.030 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:09.030 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:09.598 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:09.857 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.115 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:10.115 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.115 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.115 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.116 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.377 00:11:10.377 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
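Each of these cycles ends the same way: after the SPDK-side qpair check and detach, the kernel initiator is pointed at the same subsystem with the key material passed as DHHC-1 secrets, then disconnected and the host entry removed. The flags below are the ones that appear in the trace; the secrets are abbreviated here (the full strings are in the log), $hostnqn again stands for the uuid-based host NQN, and the two-digit field after DHHC-1 is, in the nvme-cli key format, the hash used to transform the secret (00 untransformed, 01/02/03 for SHA-256/384/512):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 \
      --dhchap-secret 'DHHC-1:00:ODdk...==:' --dhchap-ctrl-secret 'DHHC-1:03:OGVl...=:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"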
00:11:10.377 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.377 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.636 { 00:11:10.636 "cntlid": 57, 00:11:10.636 "qid": 0, 00:11:10.636 "state": "enabled", 00:11:10.636 "thread": "nvmf_tgt_poll_group_000", 00:11:10.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:10.636 "listen_address": { 00:11:10.636 "trtype": "TCP", 00:11:10.636 "adrfam": "IPv4", 00:11:10.636 "traddr": "10.0.0.3", 00:11:10.636 "trsvcid": "4420" 00:11:10.636 }, 00:11:10.636 "peer_address": { 00:11:10.636 "trtype": "TCP", 00:11:10.636 "adrfam": "IPv4", 00:11:10.636 "traddr": "10.0.0.1", 00:11:10.636 "trsvcid": "39422" 00:11:10.636 }, 00:11:10.636 "auth": { 00:11:10.636 "state": "completed", 00:11:10.636 "digest": "sha384", 00:11:10.636 "dhgroup": "ffdhe2048" 00:11:10.636 } 00:11:10.636 } 00:11:10.636 ]' 00:11:10.636 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.895 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.153 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:11.153 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: 
--dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.720 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.979 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.545 00:11:12.546 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.546 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.546 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.804 { 00:11:12.804 "cntlid": 59, 00:11:12.804 "qid": 0, 00:11:12.804 "state": "enabled", 00:11:12.804 "thread": "nvmf_tgt_poll_group_000", 00:11:12.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:12.804 "listen_address": { 00:11:12.804 "trtype": "TCP", 00:11:12.804 "adrfam": "IPv4", 00:11:12.804 "traddr": "10.0.0.3", 00:11:12.804 "trsvcid": "4420" 00:11:12.804 }, 00:11:12.804 "peer_address": { 00:11:12.804 "trtype": "TCP", 00:11:12.804 "adrfam": "IPv4", 00:11:12.804 "traddr": "10.0.0.1", 00:11:12.804 "trsvcid": "39442" 00:11:12.804 }, 00:11:12.804 "auth": { 00:11:12.804 "state": "completed", 00:11:12.804 "digest": "sha384", 00:11:12.804 "dhgroup": "ffdhe2048" 00:11:12.804 } 00:11:12.804 } 00:11:12.804 ]' 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.804 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.063 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:13.063 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:13.998 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.265 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.542 00:11:14.542 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.542 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.542 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.800 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.800 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.800 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.800 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.801 { 00:11:14.801 "cntlid": 61, 00:11:14.801 "qid": 0, 00:11:14.801 "state": "enabled", 00:11:14.801 "thread": "nvmf_tgt_poll_group_000", 00:11:14.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:14.801 "listen_address": { 00:11:14.801 "trtype": "TCP", 00:11:14.801 "adrfam": "IPv4", 00:11:14.801 "traddr": "10.0.0.3", 00:11:14.801 "trsvcid": "4420" 00:11:14.801 }, 00:11:14.801 "peer_address": { 00:11:14.801 "trtype": "TCP", 00:11:14.801 "adrfam": "IPv4", 00:11:14.801 "traddr": "10.0.0.1", 00:11:14.801 "trsvcid": "39472" 00:11:14.801 }, 00:11:14.801 "auth": { 00:11:14.801 "state": "completed", 00:11:14.801 "digest": "sha384", 00:11:14.801 "dhgroup": "ffdhe2048" 00:11:14.801 } 00:11:14.801 } 00:11:14.801 ]' 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.368 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:15.368 14:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:15.935 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.194 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.453 00:11:16.453 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.453 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.453 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.712 { 00:11:16.712 "cntlid": 63, 00:11:16.712 "qid": 0, 00:11:16.712 "state": "enabled", 00:11:16.712 "thread": "nvmf_tgt_poll_group_000", 00:11:16.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:16.712 "listen_address": { 00:11:16.712 "trtype": "TCP", 00:11:16.712 "adrfam": "IPv4", 00:11:16.712 "traddr": "10.0.0.3", 00:11:16.712 "trsvcid": "4420" 00:11:16.712 }, 00:11:16.712 "peer_address": { 00:11:16.712 "trtype": "TCP", 00:11:16.712 "adrfam": "IPv4", 00:11:16.712 "traddr": "10.0.0.1", 00:11:16.712 "trsvcid": "39486" 00:11:16.712 }, 00:11:16.712 "auth": { 00:11:16.712 "state": "completed", 00:11:16.712 "digest": "sha384", 00:11:16.712 "dhgroup": "ffdhe2048" 00:11:16.712 } 00:11:16.712 } 00:11:16.712 ]' 00:11:16.712 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.971 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.230 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:17.231 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:17.797 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.056 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.316 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.316 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.316 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:18.316 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.575 00:11:18.575 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.575 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.575 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.834 { 00:11:18.834 "cntlid": 65, 00:11:18.834 "qid": 0, 00:11:18.834 "state": "enabled", 00:11:18.834 "thread": "nvmf_tgt_poll_group_000", 00:11:18.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:18.834 "listen_address": { 00:11:18.834 "trtype": "TCP", 00:11:18.834 "adrfam": "IPv4", 00:11:18.834 "traddr": "10.0.0.3", 00:11:18.834 "trsvcid": "4420" 00:11:18.834 }, 00:11:18.834 "peer_address": { 00:11:18.834 "trtype": "TCP", 00:11:18.834 "adrfam": "IPv4", 00:11:18.834 "traddr": "10.0.0.1", 00:11:18.834 "trsvcid": "39520" 00:11:18.834 }, 00:11:18.834 "auth": { 00:11:18.834 "state": "completed", 00:11:18.834 "digest": "sha384", 00:11:18.834 "dhgroup": "ffdhe3072" 00:11:18.834 } 00:11:18.834 } 00:11:18.834 ]' 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.834 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.092 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:19.092 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.092 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.092 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.092 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.351 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:19.351 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:19.919 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.178 14:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.178 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.751 00:11:20.751 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.751 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.751 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.010 { 00:11:21.010 "cntlid": 67, 00:11:21.010 "qid": 0, 00:11:21.010 "state": "enabled", 00:11:21.010 "thread": "nvmf_tgt_poll_group_000", 00:11:21.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:21.010 "listen_address": { 00:11:21.010 "trtype": "TCP", 00:11:21.010 "adrfam": "IPv4", 00:11:21.010 "traddr": "10.0.0.3", 00:11:21.010 "trsvcid": "4420" 00:11:21.010 }, 00:11:21.010 "peer_address": { 00:11:21.010 "trtype": "TCP", 00:11:21.010 "adrfam": "IPv4", 00:11:21.010 "traddr": "10.0.0.1", 00:11:21.010 "trsvcid": "34846" 00:11:21.010 }, 00:11:21.010 "auth": { 00:11:21.010 "state": "completed", 00:11:21.010 "digest": "sha384", 00:11:21.010 "dhgroup": "ffdhe3072" 00:11:21.010 } 00:11:21.010 } 00:11:21.010 ]' 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.010 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.269 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:21.269 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.205 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.205 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.774 00:11:22.774 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.774 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.774 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.034 { 00:11:23.034 "cntlid": 69, 00:11:23.034 "qid": 0, 00:11:23.034 "state": "enabled", 00:11:23.034 "thread": "nvmf_tgt_poll_group_000", 00:11:23.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:23.034 "listen_address": { 00:11:23.034 "trtype": "TCP", 00:11:23.034 "adrfam": "IPv4", 00:11:23.034 "traddr": "10.0.0.3", 00:11:23.034 "trsvcid": "4420" 00:11:23.034 }, 00:11:23.034 "peer_address": { 00:11:23.034 "trtype": "TCP", 00:11:23.034 "adrfam": "IPv4", 00:11:23.034 "traddr": "10.0.0.1", 00:11:23.034 "trsvcid": "34866" 00:11:23.034 }, 00:11:23.034 "auth": { 00:11:23.034 "state": "completed", 00:11:23.034 "digest": "sha384", 00:11:23.034 "dhgroup": "ffdhe3072" 00:11:23.034 } 00:11:23.034 } 00:11:23.034 ]' 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:23.034 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.293 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:23.293 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:24.229 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:24.229 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:24.229 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.229 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.229 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:24.230 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:24.230 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.230 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:24.230 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.230 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.488 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.488 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.488 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.488 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.747 00:11:24.747 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.747 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.747 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.006 { 00:11:25.006 "cntlid": 71, 00:11:25.006 "qid": 0, 00:11:25.006 "state": "enabled", 00:11:25.006 "thread": "nvmf_tgt_poll_group_000", 00:11:25.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:25.006 "listen_address": { 00:11:25.006 "trtype": "TCP", 00:11:25.006 "adrfam": "IPv4", 00:11:25.006 "traddr": "10.0.0.3", 00:11:25.006 "trsvcid": "4420" 00:11:25.006 }, 00:11:25.006 "peer_address": { 00:11:25.006 "trtype": "TCP", 00:11:25.006 "adrfam": "IPv4", 00:11:25.006 "traddr": "10.0.0.1", 00:11:25.006 "trsvcid": "34884" 00:11:25.006 }, 00:11:25.006 "auth": { 00:11:25.006 "state": "completed", 00:11:25.006 "digest": "sha384", 00:11:25.006 "dhgroup": "ffdhe3072" 00:11:25.006 } 00:11:25.006 } 00:11:25.006 ]' 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.006 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.265 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:25.265 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.265 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.265 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.265 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.523 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:25.523 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.100 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.376 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:26.376 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.376 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.376 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:26.376 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.376 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.377 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.377 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.377 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.377 14:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.377 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.377 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.377 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.945 00:11:26.945 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.945 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.945 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.204 { 00:11:27.204 "cntlid": 73, 00:11:27.204 "qid": 0, 00:11:27.204 "state": "enabled", 00:11:27.204 "thread": "nvmf_tgt_poll_group_000", 00:11:27.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:27.204 "listen_address": { 00:11:27.204 "trtype": "TCP", 00:11:27.204 "adrfam": "IPv4", 00:11:27.204 "traddr": "10.0.0.3", 00:11:27.204 "trsvcid": "4420" 00:11:27.204 }, 00:11:27.204 "peer_address": { 00:11:27.204 "trtype": "TCP", 00:11:27.204 "adrfam": "IPv4", 00:11:27.204 "traddr": "10.0.0.1", 00:11:27.204 "trsvcid": "34890" 00:11:27.204 }, 00:11:27.204 "auth": { 00:11:27.204 "state": "completed", 00:11:27.204 "digest": "sha384", 00:11:27.204 "dhgroup": "ffdhe4096" 00:11:27.204 } 00:11:27.204 } 00:11:27.204 ]' 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:27.204 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.204 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.204 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.204 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.462 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:27.462 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.398 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.398 14:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.398 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.657 00:11:28.915 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.915 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.915 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.174 { 00:11:29.174 "cntlid": 75, 00:11:29.174 "qid": 0, 00:11:29.174 "state": "enabled", 00:11:29.174 "thread": "nvmf_tgt_poll_group_000", 00:11:29.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:29.174 "listen_address": { 00:11:29.174 "trtype": "TCP", 00:11:29.174 "adrfam": "IPv4", 00:11:29.174 "traddr": "10.0.0.3", 00:11:29.174 "trsvcid": "4420" 00:11:29.174 }, 00:11:29.174 "peer_address": { 00:11:29.174 "trtype": "TCP", 00:11:29.174 "adrfam": "IPv4", 00:11:29.174 "traddr": "10.0.0.1", 00:11:29.174 "trsvcid": "34910" 00:11:29.174 }, 00:11:29.174 "auth": { 00:11:29.174 "state": "completed", 00:11:29.174 "digest": "sha384", 00:11:29.174 "dhgroup": "ffdhe4096" 00:11:29.174 } 00:11:29.174 } 00:11:29.174 ]' 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.174 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.433 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:29.433 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:29.999 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.258 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.515 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.774 00:11:30.774 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.774 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.774 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.032 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.032 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.032 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.032 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.032 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.033 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.033 { 00:11:31.033 "cntlid": 77, 00:11:31.033 "qid": 0, 00:11:31.033 "state": "enabled", 00:11:31.033 "thread": "nvmf_tgt_poll_group_000", 00:11:31.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:31.033 "listen_address": { 00:11:31.033 "trtype": "TCP", 00:11:31.033 "adrfam": "IPv4", 00:11:31.033 "traddr": "10.0.0.3", 00:11:31.033 "trsvcid": "4420" 00:11:31.033 }, 00:11:31.033 "peer_address": { 00:11:31.033 "trtype": "TCP", 00:11:31.033 "adrfam": "IPv4", 00:11:31.033 "traddr": "10.0.0.1", 00:11:31.033 "trsvcid": "37944" 00:11:31.033 }, 00:11:31.033 "auth": { 00:11:31.033 "state": "completed", 00:11:31.033 "digest": "sha384", 00:11:31.033 "dhgroup": "ffdhe4096" 00:11:31.033 } 00:11:31.033 } 00:11:31.033 ]' 00:11:31.033 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.033 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.033 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:31.033 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.033 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.292 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.292 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.292 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.551 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:31.551 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:32.118 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.118 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:32.118 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.118 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.118 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.118 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.119 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:32.119 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.377 14:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.377 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.635 00:11:32.635 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.636 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.636 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.894 { 00:11:32.894 "cntlid": 79, 00:11:32.894 "qid": 0, 00:11:32.894 "state": "enabled", 00:11:32.894 "thread": "nvmf_tgt_poll_group_000", 00:11:32.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:32.894 "listen_address": { 00:11:32.894 "trtype": "TCP", 00:11:32.894 "adrfam": "IPv4", 00:11:32.894 "traddr": "10.0.0.3", 00:11:32.894 "trsvcid": "4420" 00:11:32.894 }, 00:11:32.894 "peer_address": { 00:11:32.894 "trtype": "TCP", 00:11:32.894 "adrfam": "IPv4", 00:11:32.894 "traddr": "10.0.0.1", 00:11:32.894 "trsvcid": "37972" 00:11:32.894 }, 00:11:32.894 "auth": { 00:11:32.894 "state": "completed", 00:11:32.894 "digest": "sha384", 00:11:32.894 "dhgroup": "ffdhe4096" 00:11:32.894 } 00:11:32.894 } 00:11:32.894 ]' 00:11:32.894 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.153 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.153 14:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.153 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.153 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.153 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.153 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.153 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.412 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:33.412 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:33.980 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.238 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.805 00:11:34.805 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.805 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.805 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.064 { 00:11:35.064 "cntlid": 81, 00:11:35.064 "qid": 0, 00:11:35.064 "state": "enabled", 00:11:35.064 "thread": "nvmf_tgt_poll_group_000", 00:11:35.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:35.064 "listen_address": { 00:11:35.064 "trtype": "TCP", 00:11:35.064 "adrfam": "IPv4", 00:11:35.064 "traddr": "10.0.0.3", 00:11:35.064 "trsvcid": "4420" 00:11:35.064 }, 00:11:35.064 "peer_address": { 00:11:35.064 "trtype": "TCP", 00:11:35.064 "adrfam": "IPv4", 00:11:35.064 "traddr": "10.0.0.1", 00:11:35.064 "trsvcid": "38016" 00:11:35.064 }, 00:11:35.064 "auth": { 00:11:35.064 "state": "completed", 00:11:35.064 "digest": "sha384", 00:11:35.064 "dhgroup": "ffdhe6144" 00:11:35.064 } 00:11:35.064 } 00:11:35.064 ]' 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
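For reference, the sequence that target/auth.sh repeats above for every key/dhgroup combination can be sketched as the following host- and target-side commands. This is a reader's summary of the run, not part of the log: the addresses, NQNs and key names are the ones used in this run, `hostrpc` stands for rpc.py against /var/tmp/host.sock as shown above, and `rpc_cmd` is the target-side rpc.py invocation whose socket is not visible in this excerpt.

  # one DH-HMAC-CHAP authentication round, using the values from this run (sketch, not log output)
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0              # register the host with its DH-HMAC-CHAP keys
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_get_controllers                            # expect "nvme0" via jq -r '.[].name'
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 # auth.digest/dhgroup/state checked with jq
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 \
      --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 \
      --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... # secrets elided here; full values appear in the log
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892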
00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.064 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.322 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:35.322 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.322 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.322 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.323 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.581 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:35.581 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.148 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.407 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.974 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.974 { 00:11:36.974 "cntlid": 83, 00:11:36.974 "qid": 0, 00:11:36.974 "state": "enabled", 00:11:36.974 "thread": "nvmf_tgt_poll_group_000", 00:11:36.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:36.974 "listen_address": { 00:11:36.974 "trtype": "TCP", 00:11:36.974 "adrfam": "IPv4", 00:11:36.974 "traddr": "10.0.0.3", 00:11:36.974 "trsvcid": "4420" 00:11:36.974 }, 00:11:36.974 "peer_address": { 00:11:36.974 "trtype": "TCP", 00:11:36.974 "adrfam": "IPv4", 00:11:36.974 "traddr": "10.0.0.1", 00:11:36.974 "trsvcid": "38036" 00:11:36.974 }, 00:11:36.974 "auth": { 00:11:36.974 "state": "completed", 00:11:36.974 "digest": "sha384", 
00:11:36.974 "dhgroup": "ffdhe6144" 00:11:36.974 } 00:11:36.974 } 00:11:36.974 ]' 00:11:36.974 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.233 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.492 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:37.492 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.432 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.690 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.949 00:11:38.949 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.949 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.949 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.208 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.208 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.208 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.208 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.208 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.208 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.208 { 00:11:39.208 "cntlid": 85, 00:11:39.208 "qid": 0, 00:11:39.208 "state": "enabled", 00:11:39.208 "thread": "nvmf_tgt_poll_group_000", 00:11:39.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:39.208 "listen_address": { 00:11:39.208 "trtype": "TCP", 00:11:39.208 "adrfam": "IPv4", 00:11:39.208 "traddr": "10.0.0.3", 00:11:39.208 "trsvcid": "4420" 00:11:39.208 }, 00:11:39.208 "peer_address": { 00:11:39.208 "trtype": "TCP", 00:11:39.208 "adrfam": "IPv4", 00:11:39.208 "traddr": "10.0.0.1", 00:11:39.208 "trsvcid": "38056" 
00:11:39.208 }, 00:11:39.208 "auth": { 00:11:39.208 "state": "completed", 00:11:39.208 "digest": "sha384", 00:11:39.208 "dhgroup": "ffdhe6144" 00:11:39.208 } 00:11:39.208 } 00:11:39.208 ]' 00:11:39.208 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.467 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.726 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:39.726 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:40.294 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.552 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:40.553 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.553 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.118 00:11:41.118 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.118 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.118 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.377 { 00:11:41.377 "cntlid": 87, 00:11:41.377 "qid": 0, 00:11:41.377 "state": "enabled", 00:11:41.377 "thread": "nvmf_tgt_poll_group_000", 00:11:41.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:41.377 "listen_address": { 00:11:41.377 "trtype": "TCP", 00:11:41.377 "adrfam": "IPv4", 00:11:41.377 "traddr": "10.0.0.3", 00:11:41.377 "trsvcid": "4420" 00:11:41.377 }, 00:11:41.377 "peer_address": { 00:11:41.377 "trtype": "TCP", 00:11:41.377 "adrfam": "IPv4", 00:11:41.377 "traddr": "10.0.0.1", 00:11:41.377 "trsvcid": 
"50842" 00:11:41.377 }, 00:11:41.377 "auth": { 00:11:41.377 "state": "completed", 00:11:41.377 "digest": "sha384", 00:11:41.377 "dhgroup": "ffdhe6144" 00:11:41.377 } 00:11:41.377 } 00:11:41.377 ]' 00:11:41.377 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.377 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.636 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:41.636 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.203 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.461 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.397 00:11:43.397 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.397 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.397 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.397 { 00:11:43.397 "cntlid": 89, 00:11:43.397 "qid": 0, 00:11:43.397 "state": "enabled", 00:11:43.397 "thread": "nvmf_tgt_poll_group_000", 00:11:43.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:43.397 "listen_address": { 00:11:43.397 "trtype": "TCP", 00:11:43.397 "adrfam": "IPv4", 00:11:43.397 "traddr": "10.0.0.3", 00:11:43.397 "trsvcid": "4420" 00:11:43.397 }, 00:11:43.397 "peer_address": { 00:11:43.397 
"trtype": "TCP", 00:11:43.397 "adrfam": "IPv4", 00:11:43.397 "traddr": "10.0.0.1", 00:11:43.397 "trsvcid": "50874" 00:11:43.397 }, 00:11:43.397 "auth": { 00:11:43.397 "state": "completed", 00:11:43.397 "digest": "sha384", 00:11:43.397 "dhgroup": "ffdhe8192" 00:11:43.397 } 00:11:43.397 } 00:11:43.397 ]' 00:11:43.397 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.656 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.915 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:43.915 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.482 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.740 14:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.740 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.999 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.999 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.999 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.999 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.566 00:11:45.566 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.566 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.566 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.825 { 00:11:45.825 "cntlid": 91, 00:11:45.825 "qid": 0, 00:11:45.825 "state": "enabled", 00:11:45.825 "thread": "nvmf_tgt_poll_group_000", 00:11:45.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 
00:11:45.825 "listen_address": { 00:11:45.825 "trtype": "TCP", 00:11:45.825 "adrfam": "IPv4", 00:11:45.825 "traddr": "10.0.0.3", 00:11:45.825 "trsvcid": "4420" 00:11:45.825 }, 00:11:45.825 "peer_address": { 00:11:45.825 "trtype": "TCP", 00:11:45.825 "adrfam": "IPv4", 00:11:45.825 "traddr": "10.0.0.1", 00:11:45.825 "trsvcid": "50922" 00:11:45.825 }, 00:11:45.825 "auth": { 00:11:45.825 "state": "completed", 00:11:45.825 "digest": "sha384", 00:11:45.825 "dhgroup": "ffdhe8192" 00:11:45.825 } 00:11:45.825 } 00:11:45.825 ]' 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.825 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.084 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:46.084 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:47.020 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.279 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.846 00:11:47.846 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.846 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.846 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.104 { 00:11:48.104 "cntlid": 93, 00:11:48.104 "qid": 0, 00:11:48.104 "state": "enabled", 00:11:48.104 "thread": 
"nvmf_tgt_poll_group_000", 00:11:48.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:48.104 "listen_address": { 00:11:48.104 "trtype": "TCP", 00:11:48.104 "adrfam": "IPv4", 00:11:48.104 "traddr": "10.0.0.3", 00:11:48.104 "trsvcid": "4420" 00:11:48.104 }, 00:11:48.104 "peer_address": { 00:11:48.104 "trtype": "TCP", 00:11:48.104 "adrfam": "IPv4", 00:11:48.104 "traddr": "10.0.0.1", 00:11:48.104 "trsvcid": "50936" 00:11:48.104 }, 00:11:48.104 "auth": { 00:11:48.104 "state": "completed", 00:11:48.104 "digest": "sha384", 00:11:48.104 "dhgroup": "ffdhe8192" 00:11:48.104 } 00:11:48.104 } 00:11:48.104 ]' 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:48.104 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.363 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.363 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.363 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.620 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:48.620 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.196 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:49.196 14:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:49.454 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.455 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.455 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.455 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.455 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.455 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.391 00:11:50.391 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.391 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.391 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.391 { 00:11:50.391 "cntlid": 95, 00:11:50.391 "qid": 0, 00:11:50.391 "state": "enabled", 00:11:50.391 
"thread": "nvmf_tgt_poll_group_000", 00:11:50.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:50.391 "listen_address": { 00:11:50.391 "trtype": "TCP", 00:11:50.391 "adrfam": "IPv4", 00:11:50.391 "traddr": "10.0.0.3", 00:11:50.391 "trsvcid": "4420" 00:11:50.391 }, 00:11:50.391 "peer_address": { 00:11:50.391 "trtype": "TCP", 00:11:50.391 "adrfam": "IPv4", 00:11:50.391 "traddr": "10.0.0.1", 00:11:50.391 "trsvcid": "50962" 00:11:50.391 }, 00:11:50.391 "auth": { 00:11:50.391 "state": "completed", 00:11:50.391 "digest": "sha384", 00:11:50.391 "dhgroup": "ffdhe8192" 00:11:50.391 } 00:11:50.391 } 00:11:50.391 ]' 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.391 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.650 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.650 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.650 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.650 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.650 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.908 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:50.908 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.477 14:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:51.477 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.736 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.995 00:11:51.995 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.995 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.995 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.254 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.254 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.254 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.254 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.514 { 00:11:52.514 "cntlid": 97, 00:11:52.514 "qid": 0, 00:11:52.514 "state": "enabled", 00:11:52.514 "thread": "nvmf_tgt_poll_group_000", 00:11:52.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:52.514 "listen_address": { 00:11:52.514 "trtype": "TCP", 00:11:52.514 "adrfam": "IPv4", 00:11:52.514 "traddr": "10.0.0.3", 00:11:52.514 "trsvcid": "4420" 00:11:52.514 }, 00:11:52.514 "peer_address": { 00:11:52.514 "trtype": "TCP", 00:11:52.514 "adrfam": "IPv4", 00:11:52.514 "traddr": "10.0.0.1", 00:11:52.514 "trsvcid": "59438" 00:11:52.514 }, 00:11:52.514 "auth": { 00:11:52.514 "state": "completed", 00:11:52.514 "digest": "sha512", 00:11:52.514 "dhgroup": "null" 00:11:52.514 } 00:11:52.514 } 00:11:52.514 ]' 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.514 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.773 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:52.773 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.340 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.599 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.167 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.167 14:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.167 { 00:11:54.167 "cntlid": 99, 00:11:54.167 "qid": 0, 00:11:54.167 "state": "enabled", 00:11:54.167 "thread": "nvmf_tgt_poll_group_000", 00:11:54.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:54.167 "listen_address": { 00:11:54.167 "trtype": "TCP", 00:11:54.167 "adrfam": "IPv4", 00:11:54.167 "traddr": "10.0.0.3", 00:11:54.167 "trsvcid": "4420" 00:11:54.167 }, 00:11:54.167 "peer_address": { 00:11:54.167 "trtype": "TCP", 00:11:54.167 "adrfam": "IPv4", 00:11:54.167 "traddr": "10.0.0.1", 00:11:54.167 "trsvcid": "59466" 00:11:54.167 }, 00:11:54.167 "auth": { 00:11:54.167 "state": "completed", 00:11:54.167 "digest": "sha512", 00:11:54.167 "dhgroup": "null" 00:11:54.167 } 00:11:54.167 } 00:11:54.167 ]' 00:11:54.167 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:54.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.252 14:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.252 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.511 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.079 00:11:56.079 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.079 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.079 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.337 { 00:11:56.337 "cntlid": 101, 00:11:56.337 "qid": 0, 00:11:56.337 "state": "enabled", 00:11:56.337 "thread": "nvmf_tgt_poll_group_000", 00:11:56.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:56.337 "listen_address": { 00:11:56.337 "trtype": "TCP", 00:11:56.337 "adrfam": "IPv4", 00:11:56.337 "traddr": "10.0.0.3", 00:11:56.337 "trsvcid": "4420" 00:11:56.337 }, 00:11:56.337 "peer_address": { 00:11:56.337 "trtype": "TCP", 00:11:56.337 "adrfam": "IPv4", 00:11:56.337 "traddr": "10.0.0.1", 00:11:56.337 "trsvcid": "59504" 00:11:56.337 }, 00:11:56.337 "auth": { 00:11:56.337 "state": "completed", 00:11:56.337 "digest": "sha512", 00:11:56.337 "dhgroup": "null" 00:11:56.337 } 00:11:56.337 } 00:11:56.337 ]' 00:11:56.337 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.337 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.596 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:56.596 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:57.163 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.422 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.988 00:11:57.988 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.988 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.988 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.247 { 00:11:58.247 "cntlid": 103, 00:11:58.247 "qid": 0, 00:11:58.247 "state": "enabled", 00:11:58.247 "thread": "nvmf_tgt_poll_group_000", 00:11:58.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:11:58.247 "listen_address": { 00:11:58.247 "trtype": "TCP", 00:11:58.247 "adrfam": "IPv4", 00:11:58.247 "traddr": "10.0.0.3", 00:11:58.247 "trsvcid": "4420" 00:11:58.247 }, 00:11:58.247 "peer_address": { 00:11:58.247 "trtype": "TCP", 00:11:58.247 "adrfam": "IPv4", 00:11:58.247 "traddr": "10.0.0.1", 00:11:58.247 "trsvcid": "59536" 00:11:58.247 }, 00:11:58.247 "auth": { 00:11:58.247 "state": "completed", 00:11:58.247 "digest": "sha512", 00:11:58.247 "dhgroup": "null" 00:11:58.247 } 00:11:58.247 } 00:11:58.247 ]' 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:58.247 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.247 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.247 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.247 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.505 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:58.505 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.073 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.332 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.900 00:11:59.900 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.900 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.900 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.159 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.159 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.159 
14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.159 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.159 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.159 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.159 { 00:12:00.160 "cntlid": 105, 00:12:00.160 "qid": 0, 00:12:00.160 "state": "enabled", 00:12:00.160 "thread": "nvmf_tgt_poll_group_000", 00:12:00.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:00.160 "listen_address": { 00:12:00.160 "trtype": "TCP", 00:12:00.160 "adrfam": "IPv4", 00:12:00.160 "traddr": "10.0.0.3", 00:12:00.160 "trsvcid": "4420" 00:12:00.160 }, 00:12:00.160 "peer_address": { 00:12:00.160 "trtype": "TCP", 00:12:00.160 "adrfam": "IPv4", 00:12:00.160 "traddr": "10.0.0.1", 00:12:00.160 "trsvcid": "59564" 00:12:00.160 }, 00:12:00.160 "auth": { 00:12:00.160 "state": "completed", 00:12:00.160 "digest": "sha512", 00:12:00.160 "dhgroup": "ffdhe2048" 00:12:00.160 } 00:12:00.160 } 00:12:00.160 ]' 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.160 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.727 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:00.727 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:01.319 14:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:01.319 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.578 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.837 00:12:01.837 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.837 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.837 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.125 { 00:12:02.125 "cntlid": 107, 00:12:02.125 "qid": 0, 00:12:02.125 "state": "enabled", 00:12:02.125 "thread": "nvmf_tgt_poll_group_000", 00:12:02.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:02.125 "listen_address": { 00:12:02.125 "trtype": "TCP", 00:12:02.125 "adrfam": "IPv4", 00:12:02.125 "traddr": "10.0.0.3", 00:12:02.125 "trsvcid": "4420" 00:12:02.125 }, 00:12:02.125 "peer_address": { 00:12:02.125 "trtype": "TCP", 00:12:02.125 "adrfam": "IPv4", 00:12:02.125 "traddr": "10.0.0.1", 00:12:02.125 "trsvcid": "58932" 00:12:02.125 }, 00:12:02.125 "auth": { 00:12:02.125 "state": "completed", 00:12:02.125 "digest": "sha512", 00:12:02.125 "dhgroup": "ffdhe2048" 00:12:02.125 } 00:12:02.125 } 00:12:02.125 ]' 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:02.125 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.384 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.384 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.384 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.643 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:02.643 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:03.210 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:03.211 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:03.469 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:03.469 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.469 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.469 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.470 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.037 00:12:04.037 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.037 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.037 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.298 { 00:12:04.298 "cntlid": 109, 00:12:04.298 "qid": 0, 00:12:04.298 "state": "enabled", 00:12:04.298 "thread": "nvmf_tgt_poll_group_000", 00:12:04.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:04.298 "listen_address": { 00:12:04.298 "trtype": "TCP", 00:12:04.298 "adrfam": "IPv4", 00:12:04.298 "traddr": "10.0.0.3", 00:12:04.298 "trsvcid": "4420" 00:12:04.298 }, 00:12:04.298 "peer_address": { 00:12:04.298 "trtype": "TCP", 00:12:04.298 "adrfam": "IPv4", 00:12:04.298 "traddr": "10.0.0.1", 00:12:04.298 "trsvcid": "58972" 00:12:04.298 }, 00:12:04.298 "auth": { 00:12:04.298 "state": "completed", 00:12:04.298 "digest": "sha512", 00:12:04.298 "dhgroup": "ffdhe2048" 00:12:04.298 } 00:12:04.298 } 00:12:04.298 ]' 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.298 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.298 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.298 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.298 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.558 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:04.558 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:05.125 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.384 14:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:05.384 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.384 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.384 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.384 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.384 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:05.384 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.643 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.903 00:12:05.903 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.903 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.903 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.162 { 00:12:06.162 "cntlid": 111, 00:12:06.162 "qid": 0, 00:12:06.162 "state": "enabled", 00:12:06.162 "thread": "nvmf_tgt_poll_group_000", 00:12:06.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:06.162 "listen_address": { 00:12:06.162 "trtype": "TCP", 00:12:06.162 "adrfam": "IPv4", 00:12:06.162 "traddr": "10.0.0.3", 00:12:06.162 "trsvcid": "4420" 00:12:06.162 }, 00:12:06.162 "peer_address": { 00:12:06.162 "trtype": "TCP", 00:12:06.162 "adrfam": "IPv4", 00:12:06.162 "traddr": "10.0.0.1", 00:12:06.162 "trsvcid": "59008" 00:12:06.162 }, 00:12:06.162 "auth": { 00:12:06.162 "state": "completed", 00:12:06.162 "digest": "sha512", 00:12:06.162 "dhgroup": "ffdhe2048" 00:12:06.162 } 00:12:06.162 } 00:12:06.162 ]' 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.162 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.421 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.421 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.421 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.421 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.421 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.680 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:06.680 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:07.248 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.248 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:07.248 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.248 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.507 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.507 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.507 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.507 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.507 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.766 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.026 00:12:08.026 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.026 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.026 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.285 { 00:12:08.285 "cntlid": 113, 00:12:08.285 "qid": 0, 00:12:08.285 "state": "enabled", 00:12:08.285 "thread": "nvmf_tgt_poll_group_000", 00:12:08.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:08.285 "listen_address": { 00:12:08.285 "trtype": "TCP", 00:12:08.285 "adrfam": "IPv4", 00:12:08.285 "traddr": "10.0.0.3", 00:12:08.285 "trsvcid": "4420" 00:12:08.285 }, 00:12:08.285 "peer_address": { 00:12:08.285 "trtype": "TCP", 00:12:08.285 "adrfam": "IPv4", 00:12:08.285 "traddr": "10.0.0.1", 00:12:08.285 "trsvcid": "59030" 00:12:08.285 }, 00:12:08.285 "auth": { 00:12:08.285 "state": "completed", 00:12:08.285 "digest": "sha512", 00:12:08.285 "dhgroup": "ffdhe3072" 00:12:08.285 } 00:12:08.285 } 00:12:08.285 ]' 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.285 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.544 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.544 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.544 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.803 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:08.803 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret 
DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:09.371 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.629 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.196 00:12:10.196 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.196 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.196 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.454 { 00:12:10.454 "cntlid": 115, 00:12:10.454 "qid": 0, 00:12:10.454 "state": "enabled", 00:12:10.454 "thread": "nvmf_tgt_poll_group_000", 00:12:10.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:10.454 "listen_address": { 00:12:10.454 "trtype": "TCP", 00:12:10.454 "adrfam": "IPv4", 00:12:10.454 "traddr": "10.0.0.3", 00:12:10.454 "trsvcid": "4420" 00:12:10.454 }, 00:12:10.454 "peer_address": { 00:12:10.454 "trtype": "TCP", 00:12:10.454 "adrfam": "IPv4", 00:12:10.454 "traddr": "10.0.0.1", 00:12:10.454 "trsvcid": "50168" 00:12:10.454 }, 00:12:10.454 "auth": { 00:12:10.454 "state": "completed", 00:12:10.454 "digest": "sha512", 00:12:10.454 "dhgroup": "ffdhe3072" 00:12:10.454 } 00:12:10.454 } 00:12:10.454 ]' 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.454 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.021 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:11.021 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 
2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.590 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.849 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.108 00:12:12.108 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.108 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.367 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.635 { 00:12:12.635 "cntlid": 117, 00:12:12.635 "qid": 0, 00:12:12.635 "state": "enabled", 00:12:12.635 "thread": "nvmf_tgt_poll_group_000", 00:12:12.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:12.635 "listen_address": { 00:12:12.635 "trtype": "TCP", 00:12:12.635 "adrfam": "IPv4", 00:12:12.635 "traddr": "10.0.0.3", 00:12:12.635 "trsvcid": "4420" 00:12:12.635 }, 00:12:12.635 "peer_address": { 00:12:12.635 "trtype": "TCP", 00:12:12.635 "adrfam": "IPv4", 00:12:12.635 "traddr": "10.0.0.1", 00:12:12.635 "trsvcid": "50184" 00:12:12.635 }, 00:12:12.635 "auth": { 00:12:12.635 "state": "completed", 00:12:12.635 "digest": "sha512", 00:12:12.635 "dhgroup": "ffdhe3072" 00:12:12.635 } 00:12:12.635 } 00:12:12.635 ]' 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.635 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.926 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:12.926 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:13.493 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:13.752 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.011 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.270 00:12:14.270 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.270 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.270 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.529 { 00:12:14.529 "cntlid": 119, 00:12:14.529 "qid": 0, 00:12:14.529 "state": "enabled", 00:12:14.529 "thread": "nvmf_tgt_poll_group_000", 00:12:14.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:14.529 "listen_address": { 00:12:14.529 "trtype": "TCP", 00:12:14.529 "adrfam": "IPv4", 00:12:14.529 "traddr": "10.0.0.3", 00:12:14.529 "trsvcid": "4420" 00:12:14.529 }, 00:12:14.529 "peer_address": { 00:12:14.529 "trtype": "TCP", 00:12:14.529 "adrfam": "IPv4", 00:12:14.529 "traddr": "10.0.0.1", 00:12:14.529 "trsvcid": "50218" 00:12:14.529 }, 00:12:14.529 "auth": { 00:12:14.529 "state": "completed", 00:12:14.529 "digest": "sha512", 00:12:14.529 "dhgroup": "ffdhe3072" 00:12:14.529 } 00:12:14.529 } 00:12:14.529 ]' 00:12:14.529 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.788 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.047 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:15.047 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.614 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.873 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.440 00:12:16.440 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.440 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.440 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.698 { 00:12:16.698 "cntlid": 121, 00:12:16.698 "qid": 0, 00:12:16.698 "state": "enabled", 00:12:16.698 "thread": "nvmf_tgt_poll_group_000", 00:12:16.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:16.698 "listen_address": { 00:12:16.698 "trtype": "TCP", 00:12:16.698 "adrfam": "IPv4", 00:12:16.698 "traddr": "10.0.0.3", 00:12:16.698 "trsvcid": "4420" 00:12:16.698 }, 00:12:16.698 "peer_address": { 00:12:16.698 "trtype": "TCP", 00:12:16.698 "adrfam": "IPv4", 00:12:16.698 "traddr": "10.0.0.1", 00:12:16.698 "trsvcid": "50232" 00:12:16.698 }, 00:12:16.698 "auth": { 00:12:16.698 "state": "completed", 00:12:16.698 "digest": "sha512", 00:12:16.698 "dhgroup": "ffdhe4096" 00:12:16.698 } 00:12:16.698 } 00:12:16.698 ]' 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.698 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.699 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.699 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.699 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.699 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.957 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret 
DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:16.957 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.893 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.152 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.410 00:12:18.410 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.410 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.410 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.975 { 00:12:18.975 "cntlid": 123, 00:12:18.975 "qid": 0, 00:12:18.975 "state": "enabled", 00:12:18.975 "thread": "nvmf_tgt_poll_group_000", 00:12:18.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:18.975 "listen_address": { 00:12:18.975 "trtype": "TCP", 00:12:18.975 "adrfam": "IPv4", 00:12:18.975 "traddr": "10.0.0.3", 00:12:18.975 "trsvcid": "4420" 00:12:18.975 }, 00:12:18.975 "peer_address": { 00:12:18.975 "trtype": "TCP", 00:12:18.975 "adrfam": "IPv4", 00:12:18.975 "traddr": "10.0.0.1", 00:12:18.975 "trsvcid": "50258" 00:12:18.975 }, 00:12:18.975 "auth": { 00:12:18.975 "state": "completed", 00:12:18.975 "digest": "sha512", 00:12:18.975 "dhgroup": "ffdhe4096" 00:12:18.975 } 00:12:18.975 } 00:12:18.975 ]' 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.975 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.233 14:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:19.233 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.801 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.368 14:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.368 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.626 00:12:20.626 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.626 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.626 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.884 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.885 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.885 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.885 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.885 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.885 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.885 { 00:12:20.885 "cntlid": 125, 00:12:20.885 "qid": 0, 00:12:20.885 "state": "enabled", 00:12:20.885 "thread": "nvmf_tgt_poll_group_000", 00:12:20.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:20.885 "listen_address": { 00:12:20.885 "trtype": "TCP", 00:12:20.885 "adrfam": "IPv4", 00:12:20.885 "traddr": "10.0.0.3", 00:12:20.885 "trsvcid": "4420" 00:12:20.885 }, 00:12:20.885 "peer_address": { 00:12:20.885 "trtype": "TCP", 00:12:20.885 "adrfam": "IPv4", 00:12:20.885 "traddr": "10.0.0.1", 00:12:20.885 "trsvcid": "36594" 00:12:20.885 }, 00:12:20.885 "auth": { 00:12:20.885 "state": "completed", 00:12:20.885 "digest": "sha512", 00:12:20.885 "dhgroup": "ffdhe4096" 00:12:20.885 } 00:12:20.885 } 00:12:20.885 ]' 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.143 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.402 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:21.402 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.339 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.339 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.905 00:12:22.905 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.906 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.906 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.164 { 00:12:23.164 "cntlid": 127, 00:12:23.164 "qid": 0, 00:12:23.164 "state": "enabled", 00:12:23.164 "thread": "nvmf_tgt_poll_group_000", 00:12:23.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:23.164 "listen_address": { 00:12:23.164 "trtype": "TCP", 00:12:23.164 "adrfam": "IPv4", 00:12:23.164 "traddr": "10.0.0.3", 00:12:23.164 "trsvcid": "4420" 00:12:23.164 }, 00:12:23.164 "peer_address": { 00:12:23.164 "trtype": "TCP", 00:12:23.164 "adrfam": "IPv4", 00:12:23.164 "traddr": "10.0.0.1", 00:12:23.164 "trsvcid": "36622" 00:12:23.164 }, 00:12:23.164 "auth": { 00:12:23.164 "state": "completed", 00:12:23.164 "digest": "sha512", 00:12:23.164 "dhgroup": "ffdhe4096" 00:12:23.164 } 00:12:23.164 } 00:12:23.164 ]' 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:23.164 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.423 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.423 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.423 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.423 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:23.423 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.359 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.618 14:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.618 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.185 00:12:25.185 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.185 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.185 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.444 { 00:12:25.444 "cntlid": 129, 00:12:25.444 "qid": 0, 00:12:25.444 "state": "enabled", 00:12:25.444 "thread": "nvmf_tgt_poll_group_000", 00:12:25.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:25.444 "listen_address": { 00:12:25.444 "trtype": "TCP", 00:12:25.444 "adrfam": "IPv4", 00:12:25.444 "traddr": "10.0.0.3", 00:12:25.444 "trsvcid": "4420" 00:12:25.444 }, 00:12:25.444 "peer_address": { 00:12:25.444 "trtype": "TCP", 00:12:25.444 "adrfam": "IPv4", 00:12:25.444 "traddr": "10.0.0.1", 00:12:25.444 "trsvcid": "36642" 00:12:25.444 }, 00:12:25.444 "auth": { 00:12:25.444 "state": "completed", 00:12:25.444 "digest": "sha512", 00:12:25.444 "dhgroup": "ffdhe6144" 00:12:25.444 } 00:12:25.444 } 00:12:25.444 ]' 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.444 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.703 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:25.703 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:26.278 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.278 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:26.278 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.278 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:26.549 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.550 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.550 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.550 14:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.550 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.550 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.550 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.550 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.117 00:12:27.117 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.117 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.117 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.376 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.376 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.376 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.376 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.376 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.376 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.376 { 00:12:27.376 "cntlid": 131, 00:12:27.376 "qid": 0, 00:12:27.376 "state": "enabled", 00:12:27.376 "thread": "nvmf_tgt_poll_group_000", 00:12:27.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:27.377 "listen_address": { 00:12:27.377 "trtype": "TCP", 00:12:27.377 "adrfam": "IPv4", 00:12:27.377 "traddr": "10.0.0.3", 00:12:27.377 "trsvcid": "4420" 00:12:27.377 }, 00:12:27.377 "peer_address": { 00:12:27.377 "trtype": "TCP", 00:12:27.377 "adrfam": "IPv4", 00:12:27.377 "traddr": "10.0.0.1", 00:12:27.377 "trsvcid": "36658" 00:12:27.377 }, 00:12:27.377 "auth": { 00:12:27.377 "state": "completed", 00:12:27.377 "digest": "sha512", 00:12:27.377 "dhgroup": "ffdhe6144" 00:12:27.377 } 00:12:27.377 } 00:12:27.377 ]' 00:12:27.377 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.636 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.895 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:27.895 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.462 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.721 14:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.721 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.290 00:12:29.290 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.290 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.290 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.549 { 00:12:29.549 "cntlid": 133, 00:12:29.549 "qid": 0, 00:12:29.549 "state": "enabled", 00:12:29.549 "thread": "nvmf_tgt_poll_group_000", 00:12:29.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:29.549 "listen_address": { 00:12:29.549 "trtype": "TCP", 00:12:29.549 "adrfam": "IPv4", 00:12:29.549 "traddr": "10.0.0.3", 00:12:29.549 "trsvcid": "4420" 00:12:29.549 }, 00:12:29.549 "peer_address": { 00:12:29.549 "trtype": "TCP", 00:12:29.549 "adrfam": "IPv4", 00:12:29.549 "traddr": "10.0.0.1", 00:12:29.549 "trsvcid": "36672" 00:12:29.549 }, 00:12:29.549 "auth": { 00:12:29.549 "state": "completed", 00:12:29.549 "digest": "sha512", 00:12:29.549 "dhgroup": "ffdhe6144" 00:12:29.549 } 00:12:29.549 } 00:12:29.549 ]' 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.549 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.117 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:30.117 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.685 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.943 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:30.943 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.943 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.943 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:30.943 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.944 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.511 00:12:31.511 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.511 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.511 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.770 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.770 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.770 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.770 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.770 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.770 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.770 { 00:12:31.770 "cntlid": 135, 00:12:31.771 "qid": 0, 00:12:31.771 "state": "enabled", 00:12:31.771 "thread": "nvmf_tgt_poll_group_000", 00:12:31.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:31.771 "listen_address": { 00:12:31.771 "trtype": "TCP", 00:12:31.771 "adrfam": "IPv4", 00:12:31.771 "traddr": "10.0.0.3", 00:12:31.771 "trsvcid": "4420" 00:12:31.771 }, 00:12:31.771 "peer_address": { 00:12:31.771 "trtype": "TCP", 00:12:31.771 "adrfam": "IPv4", 00:12:31.771 "traddr": "10.0.0.1", 00:12:31.771 "trsvcid": "52902" 00:12:31.771 }, 00:12:31.771 "auth": { 00:12:31.771 "state": "completed", 00:12:31.771 "digest": "sha512", 00:12:31.771 "dhgroup": "ffdhe6144" 00:12:31.771 } 00:12:31.771 } 00:12:31.771 ]' 00:12:31.771 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.771 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.771 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.771 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.771 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.030 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.030 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.030 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.030 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:32.030 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.967 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.226 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.795 00:12:33.795 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.795 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.795 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.054 { 00:12:34.054 "cntlid": 137, 00:12:34.054 "qid": 0, 00:12:34.054 "state": "enabled", 00:12:34.054 "thread": "nvmf_tgt_poll_group_000", 00:12:34.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:34.054 "listen_address": { 00:12:34.054 "trtype": "TCP", 00:12:34.054 "adrfam": "IPv4", 00:12:34.054 "traddr": "10.0.0.3", 00:12:34.054 "trsvcid": "4420" 00:12:34.054 }, 00:12:34.054 "peer_address": { 00:12:34.054 "trtype": "TCP", 00:12:34.054 "adrfam": "IPv4", 00:12:34.054 "traddr": "10.0.0.1", 00:12:34.054 "trsvcid": "52946" 00:12:34.054 }, 00:12:34.054 "auth": { 00:12:34.054 "state": "completed", 00:12:34.054 "digest": "sha512", 00:12:34.054 "dhgroup": "ffdhe8192" 00:12:34.054 } 00:12:34.054 } 00:12:34.054 ]' 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.054 14:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.054 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.621 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:34.621 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:35.187 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:35.447 14:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.447 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.015 00:12:36.015 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.015 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.015 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.274 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.274 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.274 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.274 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.274 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.274 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.274 { 00:12:36.274 "cntlid": 139, 00:12:36.274 "qid": 0, 00:12:36.274 "state": "enabled", 00:12:36.274 "thread": "nvmf_tgt_poll_group_000", 00:12:36.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:36.274 "listen_address": { 00:12:36.274 "trtype": "TCP", 00:12:36.274 "adrfam": "IPv4", 00:12:36.274 "traddr": "10.0.0.3", 00:12:36.274 "trsvcid": "4420" 00:12:36.274 }, 00:12:36.274 "peer_address": { 00:12:36.274 "trtype": "TCP", 00:12:36.274 "adrfam": "IPv4", 00:12:36.274 "traddr": "10.0.0.1", 00:12:36.274 "trsvcid": "52956" 00:12:36.274 }, 00:12:36.274 "auth": { 00:12:36.274 "state": "completed", 00:12:36.274 "digest": "sha512", 00:12:36.274 "dhgroup": "ffdhe8192" 00:12:36.274 } 00:12:36.274 } 00:12:36.274 ]' 00:12:36.274 14:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.533 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.791 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:36.791 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: --dhchap-ctrl-secret DHHC-1:02:ZWNkNzE5M2I1MzZkNTYzODFjM2NjMjc2YmY0ZmFlOGExZGU5ZmQ2ZWIwMjQ0N2Y4mIovag==: 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.725 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.659 00:12:38.659 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.659 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.659 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.918 { 00:12:38.918 "cntlid": 141, 00:12:38.918 "qid": 0, 00:12:38.918 "state": "enabled", 00:12:38.918 "thread": "nvmf_tgt_poll_group_000", 00:12:38.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:38.918 "listen_address": { 00:12:38.918 "trtype": "TCP", 00:12:38.918 "adrfam": "IPv4", 00:12:38.918 "traddr": "10.0.0.3", 00:12:38.918 "trsvcid": "4420" 00:12:38.918 }, 00:12:38.918 "peer_address": { 00:12:38.918 "trtype": "TCP", 00:12:38.918 "adrfam": "IPv4", 00:12:38.918 "traddr": "10.0.0.1", 00:12:38.918 "trsvcid": "52990" 00:12:38.918 }, 00:12:38.918 "auth": { 00:12:38.918 "state": "completed", 00:12:38.918 "digest": 
"sha512", 00:12:38.918 "dhgroup": "ffdhe8192" 00:12:38.918 } 00:12:38.918 } 00:12:38.918 ]' 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.918 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.486 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:39.486 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:01:NmQyODkwMzc5MjU4OWI2NWVmODYwNjU2YjRmNTI0YTQ7bP4l: 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:40.057 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.316 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.883 00:12:41.142 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.142 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.142 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.142 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.401 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.401 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.401 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.401 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.401 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.401 { 00:12:41.401 "cntlid": 143, 00:12:41.401 "qid": 0, 00:12:41.401 "state": "enabled", 00:12:41.401 "thread": "nvmf_tgt_poll_group_000", 00:12:41.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:41.401 "listen_address": { 00:12:41.401 "trtype": "TCP", 00:12:41.401 "adrfam": "IPv4", 00:12:41.401 "traddr": "10.0.0.3", 00:12:41.401 "trsvcid": "4420" 00:12:41.401 }, 00:12:41.401 "peer_address": { 00:12:41.401 "trtype": "TCP", 00:12:41.401 "adrfam": "IPv4", 00:12:41.401 "traddr": "10.0.0.1", 00:12:41.401 "trsvcid": "53138" 00:12:41.401 }, 00:12:41.401 "auth": { 00:12:41.401 "state": "completed", 00:12:41.401 
"digest": "sha512", 00:12:41.401 "dhgroup": "ffdhe8192" 00:12:41.401 } 00:12:41.401 } 00:12:41.401 ]' 00:12:41.401 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.401 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.660 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:41.660 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:42.596 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.855 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.423 00:12:43.423 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.423 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.423 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.683 { 00:12:43.683 "cntlid": 145, 00:12:43.683 "qid": 0, 00:12:43.683 "state": "enabled", 00:12:43.683 "thread": "nvmf_tgt_poll_group_000", 00:12:43.683 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:43.683 "listen_address": { 00:12:43.683 "trtype": "TCP", 00:12:43.683 "adrfam": "IPv4", 00:12:43.683 "traddr": "10.0.0.3", 00:12:43.683 "trsvcid": "4420" 00:12:43.683 }, 00:12:43.683 "peer_address": { 00:12:43.683 "trtype": "TCP", 00:12:43.683 "adrfam": "IPv4", 00:12:43.683 "traddr": "10.0.0.1", 00:12:43.683 "trsvcid": "53166" 00:12:43.683 }, 00:12:43.683 "auth": { 00:12:43.683 "state": "completed", 00:12:43.683 "digest": "sha512", 00:12:43.683 "dhgroup": "ffdhe8192" 00:12:43.683 } 00:12:43.683 } 00:12:43.683 ]' 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:43.683 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.942 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.942 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.942 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.201 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:44.201 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:00:ODdkNTkyMzljOTZjMjgxZGQxNTZiZDkzOTdlYTIwYTcwZTdiYjdjMTgxYzVlYmVl41ebzw==: --dhchap-ctrl-secret DHHC-1:03:OGVlMzZkOTViZGE4NDQ0ODk2MGE3NjI1ZDZjYjM0MWI0NDljZDUzNDgwZGIwNzIxMzA5YjNkYTYyMmQzOTJkZv0HE4A=: 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 00:12:44.769 14:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:44.769 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:45.337 request: 00:12:45.337 { 00:12:45.337 "name": "nvme0", 00:12:45.337 "trtype": "tcp", 00:12:45.337 "traddr": "10.0.0.3", 00:12:45.337 "adrfam": "ipv4", 00:12:45.337 "trsvcid": "4420", 00:12:45.337 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:45.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:45.337 "prchk_reftag": false, 00:12:45.337 "prchk_guard": false, 00:12:45.337 "hdgst": false, 00:12:45.337 "ddgst": false, 00:12:45.337 "dhchap_key": "key2", 00:12:45.337 "allow_unrecognized_csi": false, 00:12:45.337 "method": "bdev_nvme_attach_controller", 00:12:45.337 "req_id": 1 00:12:45.337 } 00:12:45.337 Got JSON-RPC error response 00:12:45.337 response: 00:12:45.337 { 00:12:45.337 "code": -5, 00:12:45.337 "message": "Input/output error" 00:12:45.337 } 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:45.337 
14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:45.337 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.338 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:45.338 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:45.338 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:45.906 request: 00:12:45.906 { 00:12:45.906 "name": "nvme0", 00:12:45.906 "trtype": "tcp", 00:12:45.906 "traddr": "10.0.0.3", 00:12:45.906 "adrfam": "ipv4", 00:12:45.906 "trsvcid": "4420", 00:12:45.906 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:45.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:45.906 "prchk_reftag": false, 00:12:45.906 "prchk_guard": false, 00:12:45.906 "hdgst": false, 00:12:45.906 "ddgst": false, 00:12:45.906 "dhchap_key": "key1", 00:12:45.906 "dhchap_ctrlr_key": "ckey2", 00:12:45.906 "allow_unrecognized_csi": false, 00:12:45.906 "method": "bdev_nvme_attach_controller", 00:12:45.906 "req_id": 1 00:12:45.906 } 00:12:45.906 Got JSON-RPC error response 00:12:45.906 response: 00:12:45.906 { 
00:12:45.906 "code": -5, 00:12:45.906 "message": "Input/output error" 00:12:45.906 } 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.843 
request: 00:12:46.843 { 00:12:46.843 "name": "nvme0", 00:12:46.843 "trtype": "tcp", 00:12:46.843 "traddr": "10.0.0.3", 00:12:46.843 "adrfam": "ipv4", 00:12:46.843 "trsvcid": "4420", 00:12:46.843 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:46.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:46.843 "prchk_reftag": false, 00:12:46.843 "prchk_guard": false, 00:12:46.843 "hdgst": false, 00:12:46.843 "ddgst": false, 00:12:46.843 "dhchap_key": "key1", 00:12:46.843 "dhchap_ctrlr_key": "ckey1", 00:12:46.843 "allow_unrecognized_csi": false, 00:12:46.843 "method": "bdev_nvme_attach_controller", 00:12:46.843 "req_id": 1 00:12:46.843 } 00:12:46.843 Got JSON-RPC error response 00:12:46.843 response: 00:12:46.843 { 00:12:46.843 "code": -5, 00:12:46.843 "message": "Input/output error" 00:12:46.843 } 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 68028 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68028 ']' 00:12:46.843 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68028 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68028 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.844 killing process with pid 68028 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68028' 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68028 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68028 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.844 14:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=71092 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 71092 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71092 ']' 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.844 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 71092 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71092 ']' 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.103 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.670 null0 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.12R 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wUS ]] 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wUS 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:47.670 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.odY 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.WSz ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WSz 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:47.671 14:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.X20 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.HLa ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HLa 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4at 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:12:47.671 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.605 nvme0n1 00:12:48.605 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.605 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.605 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.172 { 00:12:49.172 "cntlid": 1, 00:12:49.172 "qid": 0, 00:12:49.172 "state": "enabled", 00:12:49.172 "thread": "nvmf_tgt_poll_group_000", 00:12:49.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:49.172 "listen_address": { 00:12:49.172 "trtype": "TCP", 00:12:49.172 "adrfam": "IPv4", 00:12:49.172 "traddr": "10.0.0.3", 00:12:49.172 "trsvcid": "4420" 00:12:49.172 }, 00:12:49.172 "peer_address": { 00:12:49.172 "trtype": "TCP", 00:12:49.172 "adrfam": "IPv4", 00:12:49.172 "traddr": "10.0.0.1", 00:12:49.172 "trsvcid": "53228" 00:12:49.172 }, 00:12:49.172 "auth": { 00:12:49.172 "state": "completed", 00:12:49.172 "digest": "sha512", 00:12:49.172 "dhgroup": "ffdhe8192" 00:12:49.172 } 00:12:49.172 } 00:12:49.172 ]' 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.172 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.432 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:49.432 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key3 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:50.369 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.629 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.912 request: 00:12:50.912 { 00:12:50.912 "name": "nvme0", 00:12:50.912 "trtype": "tcp", 00:12:50.912 "traddr": "10.0.0.3", 00:12:50.912 "adrfam": "ipv4", 00:12:50.912 "trsvcid": "4420", 00:12:50.912 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:50.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:50.912 "prchk_reftag": false, 00:12:50.912 "prchk_guard": false, 00:12:50.912 "hdgst": false, 00:12:50.912 "ddgst": false, 00:12:50.912 "dhchap_key": "key3", 00:12:50.912 "allow_unrecognized_csi": false, 00:12:50.912 "method": "bdev_nvme_attach_controller", 00:12:50.912 "req_id": 1 00:12:50.912 } 00:12:50.912 Got JSON-RPC error response 00:12:50.912 response: 00:12:50.912 { 00:12:50.912 "code": -5, 00:12:50.912 "message": "Input/output error" 00:12:50.912 } 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:50.912 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.179 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.439 request: 00:12:51.439 { 00:12:51.439 "name": "nvme0", 00:12:51.439 "trtype": "tcp", 00:12:51.439 "traddr": "10.0.0.3", 00:12:51.439 "adrfam": "ipv4", 00:12:51.439 "trsvcid": "4420", 00:12:51.439 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:51.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:51.439 "prchk_reftag": false, 00:12:51.439 "prchk_guard": false, 00:12:51.439 "hdgst": false, 00:12:51.439 "ddgst": false, 00:12:51.439 "dhchap_key": "key3", 00:12:51.439 "allow_unrecognized_csi": false, 00:12:51.439 "method": "bdev_nvme_attach_controller", 00:12:51.439 "req_id": 1 00:12:51.439 } 00:12:51.439 Got JSON-RPC error response 00:12:51.439 response: 00:12:51.439 { 00:12:51.439 "code": -5, 00:12:51.439 "message": "Input/output error" 00:12:51.439 } 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:51.439 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.698 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:52.266 request: 00:12:52.266 { 00:12:52.266 "name": "nvme0", 00:12:52.266 "trtype": "tcp", 00:12:52.266 "traddr": "10.0.0.3", 00:12:52.266 "adrfam": "ipv4", 00:12:52.266 "trsvcid": "4420", 00:12:52.266 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:52.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:52.266 "prchk_reftag": false, 00:12:52.266 "prchk_guard": false, 00:12:52.266 "hdgst": false, 00:12:52.266 "ddgst": false, 00:12:52.266 "dhchap_key": "key0", 00:12:52.266 "dhchap_ctrlr_key": "key1", 00:12:52.266 "allow_unrecognized_csi": false, 00:12:52.266 "method": "bdev_nvme_attach_controller", 00:12:52.266 "req_id": 1 00:12:52.266 } 00:12:52.266 Got JSON-RPC error response 00:12:52.266 response: 00:12:52.266 { 00:12:52.266 "code": -5, 00:12:52.266 "message": "Input/output error" 00:12:52.266 } 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:52.266 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:52.525 nvme0n1 00:12:52.525 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:52.525 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:52.525 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.784 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.784 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.784 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:53.043 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:54.420 nvme0n1 00:12:54.420 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:54.420 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:54.420 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.420 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:54.679 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.679 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:54.679 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid 2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -l 0 --dhchap-secret DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: --dhchap-ctrl-secret DHHC-1:03:ZDY1M2YwMTI4ZWVmMThiYWI3ZDc0YmE2YTNjODgyM2Y0NDBmZmU5NjNkODQyNDBmZjc4OWM5MzJiNmQwZGFmOMfptx4=: 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:55.616 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:56.184 request: 00:12:56.184 { 00:12:56.184 "name": "nvme0", 00:12:56.184 "trtype": "tcp", 00:12:56.184 "traddr": "10.0.0.3", 00:12:56.184 "adrfam": "ipv4", 00:12:56.184 "trsvcid": "4420", 00:12:56.184 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:56.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892", 00:12:56.184 "prchk_reftag": false, 00:12:56.184 "prchk_guard": false, 00:12:56.184 "hdgst": false, 00:12:56.184 "ddgst": false, 00:12:56.184 "dhchap_key": "key1", 00:12:56.184 "allow_unrecognized_csi": false, 00:12:56.184 "method": "bdev_nvme_attach_controller", 00:12:56.184 "req_id": 1 00:12:56.184 } 00:12:56.184 Got JSON-RPC error response 00:12:56.184 response: 00:12:56.184 { 00:12:56.184 "code": -5, 00:12:56.184 "message": "Input/output error" 00:12:56.184 } 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:56.184 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:57.120 nvme0n1 00:12:57.379 
14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:57.379 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.379 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:57.638 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.638 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.638 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:57.897 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:58.157 nvme0n1 00:12:58.157 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:58.157 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.157 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:58.416 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.416 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.416 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.675 14:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: '' 2s 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: ]] 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmU5Mzk2ZjYwNWM1NDIwNzYxMzhmMDNiODRhZGRhODMJnTSk: 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:58.675 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: 2s 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:01.209 14:18:25 
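The nvme_set_keys helper above re-keys the kernel-managed controller in place: it resolves the fabrics controller directory under /sys/devices/virtual/nvme-fabrics/ctl/ and echoes the new DHHC-1 secret into it, then sleeps for the 2s timeout while re-authentication completes. The redirection targets are hidden by the xtrace; on recent kernels these would presumably be the controller's dhchap_secret and dhchap_ctrl_secret attributes. A minimal sketch with those attribute names as an assumption:

    ctl=nvme0
    dev=/sys/devices/virtual/nvme-fabrics/ctl/$ctl
    # assumed attribute name -- the xtrace above does not show the redirect target
    echo "DHHC-1:01:YmU5...:" > "$dev/dhchap_secret"
    sleep 2s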
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: ]] 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTNiNGVmM2QyYjNkNDA1Y2EzN2E0MDAzNzBmYzgwNTkwNzFmZWNiZDRmNjJhYTBiR7Xyyg==: 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:01.209 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:03.154 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:03.722 nvme0n1 00:13:03.722 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:03.722 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.722 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.722 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.722 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:03.722 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:04.290 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:04.290 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:04.290 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.548 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.548 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:13:04.548 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.548 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.549 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.549 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:04.549 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:04.807 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:04.807 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:04.807 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:05.067 14:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:05.067 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:05.634 request: 00:13:05.634 { 00:13:05.634 "name": "nvme0", 00:13:05.634 "dhchap_key": "key1", 00:13:05.634 "dhchap_ctrlr_key": "key3", 00:13:05.634 "method": "bdev_nvme_set_keys", 00:13:05.634 "req_id": 1 00:13:05.634 } 00:13:05.634 Got JSON-RPC error response 00:13:05.634 response: 00:13:05.634 { 00:13:05.634 "code": -13, 00:13:05.634 "message": "Permission denied" 00:13:05.634 } 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.634 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:05.893 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:05.893 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:07.270 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:07.270 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:07.270 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:07.270 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:08.207 nvme0n1 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:08.207 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.208 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:08.208 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.208 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:08.208 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:08.775 request: 00:13:08.776 { 00:13:08.776 "name": "nvme0", 00:13:08.776 "dhchap_key": "key2", 00:13:08.776 "dhchap_ctrlr_key": "key0", 00:13:08.776 "method": "bdev_nvme_set_keys", 00:13:08.776 "req_id": 1 00:13:08.776 } 00:13:08.776 Got JSON-RPC error response 00:13:08.776 response: 00:13:08.776 { 00:13:08.776 "code": -13, 00:13:08.776 "message": "Permission denied" 00:13:08.776 } 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:08.776 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.034 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:09.034 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:10.412 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:10.412 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:10.412 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68047 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68047 ']' 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68047 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68047 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:10.412 killing process with pid 68047 00:13:10.412 14:18:35 
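The NOT case above shows the other expected failure mode: bdev_nvme_set_keys re-authenticates the live controller, and when the subsystem no longer allows that key combination the target rejects it with -13 (Permission denied). The surrounding checks then poll the host's controller list until it settles; expanded, the hostrpc calls used here are:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attempt re-authentication with new keyring entries on the existing controller
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
    # poll the controller count, as the test's sleep/jq loop does
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length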
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68047' 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68047 00:13:10.412 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68047 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.670 rmmod nvme_tcp 00:13:10.670 rmmod nvme_fabrics 00:13:10.670 rmmod nvme_keyring 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 71092 ']' 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 71092 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71092 ']' 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71092 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.670 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71092 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.929 killing process with pid 71092 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71092' 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71092 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71092 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:10.929 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.12R /tmp/spdk.key-sha256.odY /tmp/spdk.key-sha384.X20 /tmp/spdk.key-sha512.4at /tmp/spdk.key-sha512.wUS /tmp/spdk.key-sha384.WSz /tmp/spdk.key-sha256.HLa '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:11.187 00:13:11.187 real 3m8.615s 00:13:11.187 user 7m34.323s 00:13:11.187 sys 0m27.749s 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.187 ************************************ 00:13:11.187 END TEST nvmf_auth_target 
00:13:11.187 ************************************ 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.187 ************************************ 00:13:11.187 START TEST nvmf_bdevio_no_huge 00:13:11.187 ************************************ 00:13:11.187 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:11.446 * Looking for test storage... 00:13:11.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.446 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.447 --rc genhtml_branch_coverage=1 00:13:11.447 --rc genhtml_function_coverage=1 00:13:11.447 --rc genhtml_legend=1 00:13:11.447 --rc geninfo_all_blocks=1 00:13:11.447 --rc geninfo_unexecuted_blocks=1 00:13:11.447 00:13:11.447 ' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.447 --rc genhtml_branch_coverage=1 00:13:11.447 --rc genhtml_function_coverage=1 00:13:11.447 --rc genhtml_legend=1 00:13:11.447 --rc geninfo_all_blocks=1 00:13:11.447 --rc geninfo_unexecuted_blocks=1 00:13:11.447 00:13:11.447 ' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.447 --rc genhtml_branch_coverage=1 00:13:11.447 --rc genhtml_function_coverage=1 00:13:11.447 --rc genhtml_legend=1 00:13:11.447 --rc geninfo_all_blocks=1 00:13:11.447 --rc geninfo_unexecuted_blocks=1 00:13:11.447 00:13:11.447 ' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.447 --rc genhtml_branch_coverage=1 00:13:11.447 --rc genhtml_function_coverage=1 00:13:11.447 --rc genhtml_legend=1 00:13:11.447 --rc geninfo_all_blocks=1 00:13:11.447 --rc geninfo_unexecuted_blocks=1 00:13:11.447 00:13:11.447 ' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.447 
14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.447 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.448 
14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:11.448 Cannot find device "nvmf_init_br" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:11.448 Cannot find device "nvmf_init_br2" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:11.448 Cannot find device "nvmf_tgt_br" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.448 Cannot find device "nvmf_tgt_br2" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:11.448 Cannot find device "nvmf_init_br" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:11.448 Cannot find device "nvmf_init_br2" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:11.448 Cannot find device "nvmf_tgt_br" 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:11.448 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:11.707 Cannot find device "nvmf_tgt_br2" 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:11.707 Cannot find device "nvmf_br" 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:11.707 Cannot find device "nvmf_init_if" 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:11.707 Cannot find device "nvmf_init_if2" 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:11.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:11.707 14:18:36 
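nvmf_veth_init above builds the test network from scratch: a target namespace, veth pairs for the initiator and target sides, static 10.0.0.0/24 addresses, and a bridge joining the bridge-side ends (the "Cannot find device" and "Cannot open network namespace" messages are just the idempotent teardown of a topology that does not exist yet). Condensed into one sketch of the commands the helper runs:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up && ip link set "$l" master nvmf_br
    done

The iptables ACCEPT rules for port 4420 and bridge forwarding that follow are tagged with an SPDK_NVMF comment, which is what lets nvmftestfini strip them later with iptables-save | grep -v SPDK_NVMF | iptables-restore; the four ping checks at the end confirm reachability in both directions before the target is started.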
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:11.707 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:11.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:11.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:11.967 00:13:11.967 --- 10.0.0.3 ping statistics --- 00:13:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.967 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:11.967 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:11.967 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:13:11.967 00:13:11.967 --- 10.0.0.4 ping statistics --- 00:13:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.967 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:11.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:11.967 00:13:11.967 --- 10.0.0.1 ping statistics --- 00:13:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.967 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:11.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:11.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:11.967 00:13:11.967 --- 10.0.0.2 ping statistics --- 00:13:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.967 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71720 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71720 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71720 ']' 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.967 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.967 [2024-12-10 14:18:36.691551] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
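With the namespace in place, nvmfappstart launches the target inside it. Because this is the "no_huge" variant of the bdevio test, nvmf_tgt gets --no-huge -s 1024, i.e. DPDK is told to use plain anonymous memory instead of hugepages, capped at 1024 MB (the EAL line below confirms --no-huge --iova-mode=va -m 1024), and the harness waits for the RPC socket before issuing any commands. A minimal stand-alone equivalent; the polling loop is a hypothetical stand-in for the harness's waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &   # -e 0xFFFF: all tracepoint groups, -m 0x78: cores 3-6
    nvmfpid=$!
    # wait until the app answers on its UNIX-domain RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done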
00:13:11.967 [2024-12-10 14:18:36.691650] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:12.226 [2024-12-10 14:18:36.856069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.226 [2024-12-10 14:18:36.928910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.226 [2024-12-10 14:18:36.928977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.226 [2024-12-10 14:18:36.929002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.226 [2024-12-10 14:18:36.929012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.226 [2024-12-10 14:18:36.929020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.226 [2024-12-10 14:18:36.929866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:12.226 [2024-12-10 14:18:36.930007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:12.226 [2024-12-10 14:18:36.930156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:12.226 [2024-12-10 14:18:36.930162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.226 [2024-12-10 14:18:36.936239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 [2024-12-10 14:18:37.778672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 Malloc0 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.164 14:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 [2024-12-10 14:18:37.817189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:13.164 { 00:13:13.164 "params": { 00:13:13.164 "name": "Nvme$subsystem", 00:13:13.164 "trtype": "$TEST_TRANSPORT", 00:13:13.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.164 "adrfam": "ipv4", 00:13:13.164 "trsvcid": "$NVMF_PORT", 00:13:13.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.164 "hdgst": ${hdgst:-false}, 00:13:13.164 "ddgst": ${ddgst:-false} 00:13:13.164 }, 00:13:13.164 "method": "bdev_nvme_attach_controller" 00:13:13.164 } 00:13:13.164 EOF 00:13:13.164 )") 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
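Once the target is up, the whole NVMe-oF configuration for this test is pushed over RPC by bdevio.sh: a TCP transport, a 64 MiB malloc bdev, one subsystem with that bdev as its namespace, and a listener on the namespaced address. The bdevio binary then runs against it (also without hugepages), fed by the JSON that gen_nvmf_target_json prints just below. The equivalent stand-alone calls (rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: bdevio attaches to the subsystem via the generated JSON config
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024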
00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:13.164 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:13.164 "params": { 00:13:13.164 "name": "Nvme1", 00:13:13.164 "trtype": "tcp", 00:13:13.164 "traddr": "10.0.0.3", 00:13:13.164 "adrfam": "ipv4", 00:13:13.164 "trsvcid": "4420", 00:13:13.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.164 "hdgst": false, 00:13:13.164 "ddgst": false 00:13:13.164 }, 00:13:13.164 "method": "bdev_nvme_attach_controller" 00:13:13.164 }' 00:13:13.164 [2024-12-10 14:18:37.875530] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:13:13.164 [2024-12-10 14:18:37.875642] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71760 ] 00:13:13.423 [2024-12-10 14:18:38.034726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.423 [2024-12-10 14:18:38.091367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.423 [2024-12-10 14:18:38.091504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.423 [2024-12-10 14:18:38.091756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.423 [2024-12-10 14:18:38.104591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:13.682 I/O targets: 00:13:13.682 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:13.682 00:13:13.682 00:13:13.682 CUnit - A unit testing framework for C - Version 2.1-3 00:13:13.682 http://cunit.sourceforge.net/ 00:13:13.682 00:13:13.682 00:13:13.682 Suite: bdevio tests on: Nvme1n1 00:13:13.682 Test: blockdev write read block ...passed 00:13:13.682 Test: blockdev write zeroes read block ...passed 00:13:13.682 Test: blockdev write zeroes read no split ...passed 00:13:13.682 Test: blockdev write zeroes read split ...passed 00:13:13.682 Test: blockdev write zeroes read split partial ...passed 00:13:13.682 Test: blockdev reset ...[2024-12-10 14:18:38.309612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:13.682 [2024-12-10 14:18:38.309727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14e90 (9): Bad file descriptor 00:13:13.682 [2024-12-10 14:18:38.329611] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:13.682 passed 00:13:13.683 Test: blockdev write read 8 blocks ...passed 00:13:13.683 Test: blockdev write read size > 128k ...passed 00:13:13.683 Test: blockdev write read invalid size ...passed 00:13:13.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.683 Test: blockdev write read max offset ...passed 00:13:13.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.683 Test: blockdev writev readv 8 blocks ...passed 00:13:13.683 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.683 Test: blockdev writev readv block ...passed 00:13:13.683 Test: blockdev writev readv size > 128k ...passed 00:13:13.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.683 Test: blockdev comparev and writev ...[2024-12-10 14:18:38.339213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.339263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.339284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.339295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.339569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.339588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.339604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.339615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.339865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.339882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.339898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.339908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.340214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.340238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.340256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:13.683 [2024-12-10 14:18:38.340266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:13.683 passed 00:13:13.683 Test: blockdev nvme passthru rw ...passed 00:13:13.683 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:18:38.341558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:13.683 [2024-12-10 14:18:38.341588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.341712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:13.683 [2024-12-10 14:18:38.341735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.341863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:13.683 [2024-12-10 14:18:38.341884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:13.683 [2024-12-10 14:18:38.342010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:13.683 [2024-12-10 14:18:38.342028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:13.683 passed 00:13:13.683 Test: blockdev nvme admin passthru ...passed 00:13:13.683 Test: blockdev copy ...passed 00:13:13.683 00:13:13.683 Run Summary: Type Total Ran Passed Failed Inactive 00:13:13.683 suites 1 1 n/a 0 0 00:13:13.683 tests 23 23 23 0 0 00:13:13.683 asserts 152 152 152 0 n/a 00:13:13.683 00:13:13.683 Elapsed time = 0.177 seconds 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.942 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.942 rmmod nvme_tcp 00:13:13.942 rmmod nvme_fabrics 00:13:13.942 rmmod nvme_keyring 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71720 ']' 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71720 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71720 ']' 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71720 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71720 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:14.201 killing process with pid 71720 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71720' 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71720 00:13:14.201 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71720 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:14.499 14:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:14.499 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:14.758 00:13:14.758 real 0m3.472s 00:13:14.758 user 0m10.332s 00:13:14.758 sys 0m1.299s 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.758 ************************************ 00:13:14.758 END TEST nvmf_bdevio_no_huge 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.758 ************************************ 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.758 ************************************ 00:13:14.758 START TEST nvmf_tls 00:13:14.758 ************************************ 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:14.758 * Looking for test storage... 
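Between the bdevio run and the TLS suite that starts here, nvmftestfini unwinds everything the setup created: the kernel NVMe modules loaded for the initiator are removed, the target process is killed, only the SPDK_NVMF-tagged firewall rules are dropped, and the veth/bridge/namespace topology is deleted. Roughly, per the trace above (the final ip netns delete is assumed; the log only shows the _remove_spdk_ns wrapper):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics                            # the rmmod lines also show nvme_keyring going away
    kill "$nvmfpid"                                        # killprocess 71720
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the test's own rules
    ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                       # assumed equivalent of remove_spdk_ns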
00:13:14.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.758 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.018 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.019 --rc genhtml_branch_coverage=1 00:13:15.019 --rc genhtml_function_coverage=1 00:13:15.019 --rc genhtml_legend=1 00:13:15.019 --rc geninfo_all_blocks=1 00:13:15.019 --rc geninfo_unexecuted_blocks=1 00:13:15.019 00:13:15.019 ' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.019 --rc genhtml_branch_coverage=1 00:13:15.019 --rc genhtml_function_coverage=1 00:13:15.019 --rc genhtml_legend=1 00:13:15.019 --rc geninfo_all_blocks=1 00:13:15.019 --rc geninfo_unexecuted_blocks=1 00:13:15.019 00:13:15.019 ' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.019 --rc genhtml_branch_coverage=1 00:13:15.019 --rc genhtml_function_coverage=1 00:13:15.019 --rc genhtml_legend=1 00:13:15.019 --rc geninfo_all_blocks=1 00:13:15.019 --rc geninfo_unexecuted_blocks=1 00:13:15.019 00:13:15.019 ' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.019 --rc genhtml_branch_coverage=1 00:13:15.019 --rc genhtml_function_coverage=1 00:13:15.019 --rc genhtml_legend=1 00:13:15.019 --rc geninfo_all_blocks=1 00:13:15.019 --rc geninfo_unexecuted_blocks=1 00:13:15.019 00:13:15.019 ' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.019 14:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.019 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.019 
14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:15.019 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:15.020 Cannot find device "nvmf_init_br" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:15.020 Cannot find device "nvmf_init_br2" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:15.020 Cannot find device "nvmf_tgt_br" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.020 Cannot find device "nvmf_tgt_br2" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:15.020 Cannot find device "nvmf_init_br" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:15.020 Cannot find device "nvmf_init_br2" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:15.020 Cannot find device "nvmf_tgt_br" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:15.020 Cannot find device "nvmf_tgt_br2" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:15.020 Cannot find device "nvmf_br" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:15.020 Cannot find device "nvmf_init_if" 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:15.020 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:15.279 Cannot find device "nvmf_init_if2" 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:15.279 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:15.279 14:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:15.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:15.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:13:15.279 00:13:15.279 --- 10.0.0.3 ping statistics --- 00:13:15.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.279 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:15.279 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:15.279 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:13:15.279 00:13:15.279 --- 10.0.0.4 ping statistics --- 00:13:15.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.279 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:15.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:15.279 00:13:15.279 --- 10.0.0.1 ping statistics --- 00:13:15.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.279 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:15.279 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:15.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
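The ipts helper seen expanding here is what makes that selective cleanup possible: every iptables rule the test installs is re-issued with a comment match that records its own arguments under an SPDK_NVMF: prefix, so the teardown filter (grep -v SPDK_NVMF) removes exactly these rules and nothing else. Reconstructed from the expanded commands in the trace; the function body itself is not shown, so treat this as a sketch:

    ipts() {
        # run the requested iptables command, tagging the rule with its own arguments
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # as used above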
00:13:15.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:15.538 00:13:15.538 --- 10.0.0.2 ping statistics --- 00:13:15.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.538 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72000 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72000 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72000 ']' 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.538 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.538 [2024-12-10 14:18:40.203637] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
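The TLS target is brought up differently from the bdevio one: nvmfappstart passes -m 0x2 --wait-for-rpc, so nvmf_tgt starts but holds subsystem initialization until told otherwise. That window is what lets tls.sh switch the socket layer to the ssl implementation and set the TLS version over RPC before anything starts listening; framework_start_init then completes startup, and later in the trace the listener is added with -k (the tcp.c "TLS support is considered experimental" notice follows it) and a PSK is registered with keyring_file_add_key. The essential sequence, condensed from the RPCs that follow below:

    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    ./scripts/rpc.py sock_set_default_impl -i ssl                # make ssl the default socket implementation
    ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    ./scripts/rpc.py framework_start_init                        # finish initialization with TLS in place
    # once a subsystem exists, a TLS-capable listener with a file-backed PSK looks like:
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k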
00:13:15.539 [2024-12-10 14:18:40.203751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.539 [2024-12-10 14:18:40.358837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.798 [2024-12-10 14:18:40.397512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.798 [2024-12-10 14:18:40.397577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.798 [2024-12-10 14:18:40.397600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.798 [2024-12-10 14:18:40.397610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.798 [2024-12-10 14:18:40.397618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.798 [2024-12-10 14:18:40.398027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:15.798 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:16.057 true 00:13:16.057 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:16.057 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:16.316 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:16.316 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:16.316 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:16.575 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:16.575 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:16.834 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:16.834 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:16.834 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:17.093 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:17.093 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:17.353 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:17.353 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:17.353 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:17.353 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:17.612 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:17.612 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:17.612 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:17.871 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:17.871 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:18.130 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:18.130 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:18.131 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:18.389 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:18.389 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.pJZPmYXBtt 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.tiq5lJV5wl 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pJZPmYXBtt 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.tiq5lJV5wl 00:13:18.649 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:18.908 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:19.167 [2024-12-10 14:18:43.916644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:19.167 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.pJZPmYXBtt 00:13:19.167 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pJZPmYXBtt 00:13:19.167 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:19.426 [2024-12-10 14:18:44.214942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.426 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:19.685 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:19.944 [2024-12-10 14:18:44.675082] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:19.944 [2024-12-10 14:18:44.675367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:19.944 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:20.203 malloc0 00:13:20.203 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:20.464 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt 00:13:20.723 14:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:20.981 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pJZPmYXBtt 00:13:33.192 Initializing NVMe Controllers 00:13:33.192 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.192 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:33.192 Initialization complete. Launching workers. 00:13:33.192 ======================================================== 00:13:33.192 Latency(us) 00:13:33.192 Device Information : IOPS MiB/s Average min max 00:13:33.192 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10351.76 40.44 6183.99 937.40 9464.21 00:13:33.192 ======================================================== 00:13:33.192 Total : 10351.76 40.44 6183.99 937.40 9464.21 00:13:33.192 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pJZPmYXBtt 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pJZPmYXBtt 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72225 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72225 /var/tmp/bdevperf.sock 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72225 ']' 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
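The trace up to this point covers the whole positive TLS path: the ssl socket implementation is selected and its TLS-version and kTLS options exercised, two PSKs are rendered in the NVMe TLS interchange format (the NVMeTLSkey-1:01:...: strings above), written to mktemp files and restricted to mode 0600, the target is given a TLS-enabled listener (-k) plus a keyring entry bound to host1, and spdk_nvme_perf then drives I/O over the encrypted connection with -S ssl. A condensed sketch of that sequence, reusing the exact RPC calls from the trace (the key path, the 10.0.0.3:4420 listener and the nvmf_tgt_ns_spdk namespace are this test environment's values, not general defaults):

  #!/usr/bin/env bash
  # Condensed from the tls.sh trace above; assumes an nvmf_tgt is already running
  # and waiting for RPCs (the suite finishes init with framework_start_init below).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=/tmp/tmp.pJZPmYXBtt                  # interchange-format PSK written earlier via mktemp

  # Force the ssl sock implementation and require TLS 1.3, then finish app init.
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init

  # Target side: TCP transport, subsystem, TLS-enabled listener (-k), one malloc namespace.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Register the PSK in the keyring and allow host1 to use it.
  $rpc keyring_file_add_key key0 "$key_path"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # Initiator side: perf over TLS, pointing at the same PSK file.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key_path"

The bdevperf run that starts next (pid 72225) verifies the same connection from the bdev layer, reusing the same key.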
00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.192 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.192 [2024-12-10 14:18:55.973386] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:13:33.192 [2024-12-10 14:18:55.973487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72225 ] 00:13:33.192 [2024-12-10 14:18:56.124090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.192 [2024-12-10 14:18:56.163771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.192 [2024-12-10 14:18:56.198671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:33.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt 00:13:33.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:33.192 [2024-12-10 14:18:56.735538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:33.192 TLSTESTn1 00:13:33.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:33.192 Running I/O for 10 seconds... 
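The numbers that follow come from bdevperf: the suite attaches a TLS-protected controller as bdev TLSTESTn1 inside the bdevperf process and then runs its verify workload against it for ten seconds. Condensed from the trace, with the suite's waitforlisten helper replaced here by a simple wait for the RPC socket, purely for illustration:

  spdk=/home/vagrant/spdk_repo/spdk

  # bdevperf is started with -z so it waits for RPC configuration before running I/O.
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

  # Hand bdevperf the same PSK and attach the TLS controller; this creates bdev TLSTESTn1.
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0

  # Kick off the configured workload; the per-second samples and summary below are its output.
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests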
00:13:34.128 4329.00 IOPS, 16.91 MiB/s [2024-12-10T14:19:00.343Z] 4379.00 IOPS, 17.11 MiB/s [2024-12-10T14:19:01.281Z] 4394.67 IOPS, 17.17 MiB/s [2024-12-10T14:19:02.219Z] 4397.75 IOPS, 17.18 MiB/s [2024-12-10T14:19:03.156Z] 4405.60 IOPS, 17.21 MiB/s [2024-12-10T14:19:04.103Z] 4415.83 IOPS, 17.25 MiB/s [2024-12-10T14:19:05.055Z] 4420.71 IOPS, 17.27 MiB/s [2024-12-10T14:19:05.993Z] 4426.38 IOPS, 17.29 MiB/s [2024-12-10T14:19:07.370Z] 4435.33 IOPS, 17.33 MiB/s [2024-12-10T14:19:07.370Z] 4441.90 IOPS, 17.35 MiB/s 00:13:42.533 Latency(us) 00:13:42.533 [2024-12-10T14:19:07.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.533 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:42.533 Verification LBA range: start 0x0 length 0x2000 00:13:42.533 TLSTESTn1 : 10.01 4448.22 17.38 0.00 0.00 28726.11 4915.20 25141.99 00:13:42.533 [2024-12-10T14:19:07.370Z] =================================================================================================================== 00:13:42.533 [2024-12-10T14:19:07.370Z] Total : 4448.22 17.38 0.00 0.00 28726.11 4915.20 25141.99 00:13:42.533 { 00:13:42.533 "results": [ 00:13:42.533 { 00:13:42.533 "job": "TLSTESTn1", 00:13:42.533 "core_mask": "0x4", 00:13:42.533 "workload": "verify", 00:13:42.533 "status": "finished", 00:13:42.533 "verify_range": { 00:13:42.533 "start": 0, 00:13:42.533 "length": 8192 00:13:42.533 }, 00:13:42.533 "queue_depth": 128, 00:13:42.533 "io_size": 4096, 00:13:42.533 "runtime": 10.014354, 00:13:42.533 "iops": 4448.215032142863, 00:13:42.533 "mibps": 17.375839969308057, 00:13:42.533 "io_failed": 0, 00:13:42.533 "io_timeout": 0, 00:13:42.533 "avg_latency_us": 28726.106661551083, 00:13:42.533 "min_latency_us": 4915.2, 00:13:42.533 "max_latency_us": 25141.992727272725 00:13:42.533 } 00:13:42.533 ], 00:13:42.533 "core_count": 1 00:13:42.533 } 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72225 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72225 ']' 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72225 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.533 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72225 00:13:42.533 killing process with pid 72225 00:13:42.533 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.533 00:13:42.533 Latency(us) 00:13:42.533 [2024-12-10T14:19:07.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.533 [2024-12-10T14:19:07.370Z] =================================================================================================================== 00:13:42.533 [2024-12-10T14:19:07.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 72225' 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72225 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72225 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tiq5lJV5wl 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tiq5lJV5wl 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tiq5lJV5wl 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tiq5lJV5wl 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72348 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72348 /var/tmp/bdevperf.sock 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72348 ']' 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.534 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.534 [2024-12-10 14:19:07.199429] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:13:42.534 [2024-12-10 14:19:07.199546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72348 ] 00:13:42.534 [2024-12-10 14:19:07.343031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.810 [2024-12-10 14:19:07.376232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.810 [2024-12-10 14:19:07.404553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.810 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.810 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:42.810 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tiq5lJV5wl 00:13:43.069 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:43.328 [2024-12-10 14:19:08.007665] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:43.328 [2024-12-10 14:19:08.018477] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:43.328 [2024-12-10 14:19:08.018552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99f030 (107): Transport endpoint is not connected 00:13:43.328 [2024-12-10 14:19:08.019544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99f030 (9): Bad file descriptor 00:13:43.328 [2024-12-10 14:19:08.020541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:43.328 [2024-12-10 14:19:08.020563] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:43.328 [2024-12-10 14:19:08.020588] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:43.328 [2024-12-10 14:19:08.020602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
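This bdevperf instance (pid 72348) was deliberately given the second key, /tmp/tmp.tiq5lJV5wl, while the target only knows the first key for host1, so the TLS setup cannot complete: the socket is torn down ("Transport endpoint is not connected"), the controller ends up in a failed state, and bdev_nvme_attach_controller returns the JSON-RPC Input/output error shown below. The suite counts that failure as a pass via its NOT wrapper; stripped of the wrapper, the check is essentially the following (same RPC calls as the trace, with a plain shell negation standing in for the NOT helper):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # Register a PSK the target has never been told about ...
  $rpc keyring_file_add_key key0 /tmp/tmp.tiq5lJV5wl

  # ... and expect the TLS attach to fail.
  if ! $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach failed as expected (mismatched PSK)"
  fi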
00:13:43.328 request: 00:13:43.328 { 00:13:43.328 "name": "TLSTEST", 00:13:43.328 "trtype": "tcp", 00:13:43.328 "traddr": "10.0.0.3", 00:13:43.328 "adrfam": "ipv4", 00:13:43.328 "trsvcid": "4420", 00:13:43.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:43.328 "prchk_reftag": false, 00:13:43.328 "prchk_guard": false, 00:13:43.328 "hdgst": false, 00:13:43.328 "ddgst": false, 00:13:43.328 "psk": "key0", 00:13:43.328 "allow_unrecognized_csi": false, 00:13:43.328 "method": "bdev_nvme_attach_controller", 00:13:43.328 "req_id": 1 00:13:43.328 } 00:13:43.328 Got JSON-RPC error response 00:13:43.328 response: 00:13:43.328 { 00:13:43.328 "code": -5, 00:13:43.328 "message": "Input/output error" 00:13:43.328 } 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72348 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72348 ']' 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72348 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72348 00:13:43.328 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:43.328 killing process with pid 72348 00:13:43.328 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.328 00:13:43.328 Latency(us) 00:13:43.328 [2024-12-10T14:19:08.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.328 [2024-12-10T14:19:08.165Z] =================================================================================================================== 00:13:43.328 [2024-12-10T14:19:08.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:43.329 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:43.329 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72348' 00:13:43.329 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72348 00:13:43.329 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72348 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pJZPmYXBtt 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pJZPmYXBtt 
00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pJZPmYXBtt 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pJZPmYXBtt 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72369 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72369 /var/tmp/bdevperf.sock 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72369 ']' 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.588 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.588 [2024-12-10 14:19:08.255101] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:13:43.588 [2024-12-10 14:19:08.255421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72369 ] 00:13:43.588 [2024-12-10 14:19:08.396368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.847 [2024-12-10 14:19:08.430642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.847 [2024-12-10 14:19:08.460398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.847 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.847 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:43.847 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt 00:13:44.106 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:44.365 [2024-12-10 14:19:08.956459] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.365 [2024-12-10 14:19:08.961145] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:44.365 [2024-12-10 14:19:08.961182] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:44.365 [2024-12-10 14:19:08.961244] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:44.365 [2024-12-10 14:19:08.961920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde8030 (107): Transport endpoint is not connected 00:13:44.365 [2024-12-10 14:19:08.962891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde8030 (9): Bad file descriptor 00:13:44.365 [2024-12-10 14:19:08.963888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:44.365 [2024-12-10 14:19:08.964100] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:44.365 [2024-12-10 14:19:08.964118] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:44.365 [2024-12-10 14:19:08.964136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:44.365 request: 00:13:44.365 { 00:13:44.365 "name": "TLSTEST", 00:13:44.365 "trtype": "tcp", 00:13:44.365 "traddr": "10.0.0.3", 00:13:44.365 "adrfam": "ipv4", 00:13:44.365 "trsvcid": "4420", 00:13:44.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.365 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:44.365 "prchk_reftag": false, 00:13:44.365 "prchk_guard": false, 00:13:44.365 "hdgst": false, 00:13:44.365 "ddgst": false, 00:13:44.365 "psk": "key0", 00:13:44.365 "allow_unrecognized_csi": false, 00:13:44.365 "method": "bdev_nvme_attach_controller", 00:13:44.365 "req_id": 1 00:13:44.365 } 00:13:44.365 Got JSON-RPC error response 00:13:44.365 response: 00:13:44.365 { 00:13:44.365 "code": -5, 00:13:44.365 "message": "Input/output error" 00:13:44.365 } 00:13:44.365 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72369 00:13:44.365 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72369 ']' 00:13:44.365 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72369 00:13:44.365 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:44.365 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.365 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72369 00:13:44.365 killing process with pid 72369 00:13:44.365 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.365 00:13:44.365 Latency(us) 00:13:44.365 [2024-12-10T14:19:09.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.365 [2024-12-10T14:19:09.202Z] =================================================================================================================== 00:13:44.365 [2024-12-10T14:19:09.202Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.365 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:44.365 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:44.365 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72369' 00:13:44.365 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72369 00:13:44.365 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72369 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pJZPmYXBtt 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pJZPmYXBtt 
00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pJZPmYXBtt 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pJZPmYXBtt 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72390 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72390 /var/tmp/bdevperf.sock 00:13:44.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72390 ']' 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.366 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.366 [2024-12-10 14:19:09.191440] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:13:44.366 [2024-12-10 14:19:09.191692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72390 ] 00:13:44.625 [2024-12-10 14:19:09.334375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.625 [2024-12-10 14:19:09.364483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.625 [2024-12-10 14:19:09.393349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:44.625 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.625 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:44.625 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:45.193 [2024-12-10 14:19:09.944455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:45.193 [2024-12-10 14:19:09.949497] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:45.193 [2024-12-10 14:19:09.949733] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:45.193 [2024-12-10 14:19:09.949917] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:45.193 [2024-12-10 14:19:09.950278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bc030 (107): Transport endpoint is not connected 00:13:45.193 [2024-12-10 14:19:09.951268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bc030 (9): Bad file descriptor 00:13:45.193 [2024-12-10 14:19:09.952265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:45.193 [2024-12-10 14:19:09.952470] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:45.193 [2024-12-10 14:19:09.952501] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:45.193 [2024-12-10 14:19:09.952518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
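The two runs above (pids 72369 and 72390) keep the correct key material but change one identity at a time: first the initiator connects as host2, which was never added to cnode1, then it targets cnode2, which does not exist on the target. In both cases the target cannot find a PSK for the identity it derives from the pair — the log shows it looking up "NVMe0R01 <hostnqn> <subnqn>" — so the handshake fails just like the wrong-key case, and the JSON-RPC error below is again the expected outcome. Condensed (the suite actually starts a fresh bdevperf per case; both attaches are shown together here only for brevity):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.pJZPmYXBtt      # the correct key this time

  # hostnqn that was never associated with the subsystem -> no PSK for that identity
  ! $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0

  # subsystem NQN that does not exist on the target -> same failure
  ! $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0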
00:13:45.193 request: 00:13:45.193 { 00:13:45.193 "name": "TLSTEST", 00:13:45.193 "trtype": "tcp", 00:13:45.193 "traddr": "10.0.0.3", 00:13:45.193 "adrfam": "ipv4", 00:13:45.193 "trsvcid": "4420", 00:13:45.193 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:45.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.193 "prchk_reftag": false, 00:13:45.193 "prchk_guard": false, 00:13:45.193 "hdgst": false, 00:13:45.193 "ddgst": false, 00:13:45.193 "psk": "key0", 00:13:45.193 "allow_unrecognized_csi": false, 00:13:45.193 "method": "bdev_nvme_attach_controller", 00:13:45.193 "req_id": 1 00:13:45.193 } 00:13:45.193 Got JSON-RPC error response 00:13:45.193 response: 00:13:45.193 { 00:13:45.193 "code": -5, 00:13:45.193 "message": "Input/output error" 00:13:45.193 } 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72390 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72390 ']' 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72390 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.193 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72390 00:13:45.193 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:45.193 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:45.193 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72390' 00:13:45.193 killing process with pid 72390 00:13:45.193 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72390 00:13:45.193 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.193 00:13:45.193 Latency(us) 00:13:45.193 [2024-12-10T14:19:10.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.193 [2024-12-10T14:19:10.030Z] =================================================================================================================== 00:13:45.193 [2024-12-10T14:19:10.030Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.193 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72390 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:45.452 14:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72411 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72411 /var/tmp/bdevperf.sock 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72411 ']' 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.452 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.452 [2024-12-10 14:19:10.197390] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
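The bdevperf instance starting here (pid 72411) covers the degenerate case of an empty PSK path: keyring_file_add_key rejects it ("Non-absolute paths are not allowed", JSON-RPC error -1), and the subsequent attach fails with -126 "Required key not available" because the named key was never created. A minimal reproduction with the same calls:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # An empty (or otherwise non-absolute) path is refused by the file-based keyring ...
  ! $rpc keyring_file_add_key key0 ''

  # ... so "key0" does not exist and the TLS attach is rejected as well.
  ! $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0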
00:13:45.452 [2024-12-10 14:19:10.197761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72411 ] 00:13:45.711 [2024-12-10 14:19:10.346383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.711 [2024-12-10 14:19:10.377574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.711 [2024-12-10 14:19:10.406411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.649 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.649 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:46.649 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:46.649 [2024-12-10 14:19:11.345030] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:46.649 [2024-12-10 14:19:11.345078] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:46.649 request: 00:13:46.649 { 00:13:46.649 "name": "key0", 00:13:46.649 "path": "", 00:13:46.649 "method": "keyring_file_add_key", 00:13:46.649 "req_id": 1 00:13:46.649 } 00:13:46.649 Got JSON-RPC error response 00:13:46.649 response: 00:13:46.649 { 00:13:46.649 "code": -1, 00:13:46.649 "message": "Operation not permitted" 00:13:46.649 } 00:13:46.649 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:46.909 [2024-12-10 14:19:11.561228] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:46.909 [2024-12-10 14:19:11.561313] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:46.909 request: 00:13:46.909 { 00:13:46.909 "name": "TLSTEST", 00:13:46.909 "trtype": "tcp", 00:13:46.909 "traddr": "10.0.0.3", 00:13:46.909 "adrfam": "ipv4", 00:13:46.909 "trsvcid": "4420", 00:13:46.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.909 "prchk_reftag": false, 00:13:46.909 "prchk_guard": false, 00:13:46.909 "hdgst": false, 00:13:46.909 "ddgst": false, 00:13:46.909 "psk": "key0", 00:13:46.909 "allow_unrecognized_csi": false, 00:13:46.909 "method": "bdev_nvme_attach_controller", 00:13:46.909 "req_id": 1 00:13:46.909 } 00:13:46.909 Got JSON-RPC error response 00:13:46.909 response: 00:13:46.909 { 00:13:46.909 "code": -126, 00:13:46.909 "message": "Required key not available" 00:13:46.909 } 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72411 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72411 ']' 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72411 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.909 14:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72411 00:13:46.909 killing process with pid 72411 00:13:46.909 Received shutdown signal, test time was about 10.000000 seconds 00:13:46.909 00:13:46.909 Latency(us) 00:13:46.909 [2024-12-10T14:19:11.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.909 [2024-12-10T14:19:11.746Z] =================================================================================================================== 00:13:46.909 [2024-12-10T14:19:11.746Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72411' 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72411 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72411 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 72000 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72000 ']' 00:13:46.909 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72000 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72000 00:13:47.169 killing process with pid 72000 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72000' 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72000 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72000 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.7sN9koHjlU 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.7sN9koHjlU 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72455 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72455 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72455 ']' 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.169 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.170 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.170 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.170 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.429 [2024-12-10 14:19:12.029811] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:13:47.429 [2024-12-10 14:19:12.029899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.429 [2024-12-10 14:19:12.168715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.429 [2024-12-10 14:19:12.196378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.429 [2024-12-10 14:19:12.196446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:47.429 [2024-12-10 14:19:12.196473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.429 [2024-12-10 14:19:12.196480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.429 [2024-12-10 14:19:12.196486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.429 [2024-12-10 14:19:12.196764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.429 [2024-12-10 14:19:12.224977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.688 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.688 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:47.688 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.688 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.688 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.689 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.689 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.7sN9koHjlU 00:13:47.689 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7sN9koHjlU 00:13:47.689 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:47.689 [2024-12-10 14:19:12.519741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.948 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:47.948 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:48.515 [2024-12-10 14:19:13.063837] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:48.515 [2024-12-10 14:19:13.064332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:48.515 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:48.515 malloc0 00:13:48.515 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:48.774 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:13:49.033 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7sN9koHjlU 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
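Condensed from the RPCs traced in this block, the TLS-capable target bring-up is the following sequence (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; all arguments are copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requests TLS on the listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Note the listener notices above mark TLS support as experimental for this SPDK revision.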
00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7sN9koHjlU 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72502 00:13:49.292 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72502 /var/tmp/bdevperf.sock 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72502 ']' 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.293 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.293 [2024-12-10 14:19:14.091114] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
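The run that follows drives the initiator side of the same connection from bdevperf over its own RPC socket; stripped of the waitforlisten plumbing, it is roughly this (bdevperf stands for the built example at build/examples/bdevperf):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The attached controller shows up as TLSTESTn1, and the 10-second verify workload below settles around 4.4k IOPS at 4 KiB.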
00:13:49.293 [2024-12-10 14:19:14.091354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72502 ] 00:13:49.551 [2024-12-10 14:19:14.242330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.551 [2024-12-10 14:19:14.281703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.551 [2024-12-10 14:19:14.314849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:50.513 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.513 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:50.513 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:13:50.771 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:50.771 [2024-12-10 14:19:15.557038] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.030 TLSTESTn1 00:13:51.030 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:51.030 Running I/O for 10 seconds... 00:13:53.344 4251.00 IOPS, 16.61 MiB/s [2024-12-10T14:19:19.119Z] 4339.00 IOPS, 16.95 MiB/s [2024-12-10T14:19:20.056Z] 4372.67 IOPS, 17.08 MiB/s [2024-12-10T14:19:20.991Z] 4390.00 IOPS, 17.15 MiB/s [2024-12-10T14:19:21.926Z] 4399.00 IOPS, 17.18 MiB/s [2024-12-10T14:19:22.864Z] 4403.33 IOPS, 17.20 MiB/s [2024-12-10T14:19:23.800Z] 4401.00 IOPS, 17.19 MiB/s [2024-12-10T14:19:25.181Z] 4396.75 IOPS, 17.17 MiB/s [2024-12-10T14:19:26.121Z] 4399.89 IOPS, 17.19 MiB/s [2024-12-10T14:19:26.121Z] 4396.60 IOPS, 17.17 MiB/s 00:14:01.284 Latency(us) 00:14:01.284 [2024-12-10T14:19:26.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.284 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:01.284 Verification LBA range: start 0x0 length 0x2000 00:14:01.284 TLSTESTn1 : 10.02 4402.40 17.20 0.00 0.00 29024.37 5510.98 22520.55 00:14:01.284 [2024-12-10T14:19:26.121Z] =================================================================================================================== 00:14:01.284 [2024-12-10T14:19:26.121Z] Total : 4402.40 17.20 0.00 0.00 29024.37 5510.98 22520.55 00:14:01.284 { 00:14:01.284 "results": [ 00:14:01.284 { 00:14:01.284 "job": "TLSTESTn1", 00:14:01.284 "core_mask": "0x4", 00:14:01.284 "workload": "verify", 00:14:01.284 "status": "finished", 00:14:01.284 "verify_range": { 00:14:01.284 "start": 0, 00:14:01.284 "length": 8192 00:14:01.284 }, 00:14:01.284 "queue_depth": 128, 00:14:01.284 "io_size": 4096, 00:14:01.284 "runtime": 10.01523, 00:14:01.284 "iops": 4402.395152183225, 00:14:01.284 "mibps": 17.196856063215723, 00:14:01.284 "io_failed": 0, 00:14:01.284 "io_timeout": 0, 00:14:01.284 "avg_latency_us": 29024.36979255713, 00:14:01.284 "min_latency_us": 5510.981818181818, 00:14:01.284 
"max_latency_us": 22520.552727272727 00:14:01.284 } 00:14:01.284 ], 00:14:01.284 "core_count": 1 00:14:01.285 } 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72502 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72502 ']' 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72502 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72502 00:14:01.285 killing process with pid 72502 00:14:01.285 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.285 00:14:01.285 Latency(us) 00:14:01.285 [2024-12-10T14:19:26.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.285 [2024-12-10T14:19:26.122Z] =================================================================================================================== 00:14:01.285 [2024-12-10T14:19:26.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72502' 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72502 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72502 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.7sN9koHjlU 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7sN9koHjlU 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7sN9koHjlU 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7sN9koHjlU 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7sN9koHjlU 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72639 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72639 /var/tmp/bdevperf.sock 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72639 ']' 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.285 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.285 [2024-12-10 14:19:26.042361] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
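This second bdevperf instance (pid 72639) exercises the failure path: tls.sh@171 re-chmodded the key file to 0666 before the run, so the expectation, borne out by the JSON-RPC errors below, is that the key cannot be registered and the attach fails. Schematically (commands as traced, results paraphrased from the error responses):

    chmod 0666 /tmp/tmp.7sN9koHjlU
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU
    # keyring_file_check_path rejects permissions 0100666 -> "Operation not permitted"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # "Could not load PSK: key0" -> "Required key not available" (code -126)

Since run_bdevperf is wrapped in NOT at tls.sh@172, this non-zero outcome is the passing result for the test.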
00:14:01.285 [2024-12-10 14:19:26.043383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72639 ] 00:14:01.545 [2024-12-10 14:19:26.182127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.545 [2024-12-10 14:19:26.211614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.545 [2024-12-10 14:19:26.238559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.545 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.545 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:01.545 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:01.804 [2024-12-10 14:19:26.552178] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7sN9koHjlU': 0100666 00:14:01.804 [2024-12-10 14:19:26.552218] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:01.804 request: 00:14:01.804 { 00:14:01.804 "name": "key0", 00:14:01.804 "path": "/tmp/tmp.7sN9koHjlU", 00:14:01.804 "method": "keyring_file_add_key", 00:14:01.804 "req_id": 1 00:14:01.804 } 00:14:01.804 Got JSON-RPC error response 00:14:01.804 response: 00:14:01.804 { 00:14:01.804 "code": -1, 00:14:01.804 "message": "Operation not permitted" 00:14:01.804 } 00:14:01.804 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:02.063 [2024-12-10 14:19:26.784341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.063 [2024-12-10 14:19:26.784441] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:02.063 request: 00:14:02.063 { 00:14:02.063 "name": "TLSTEST", 00:14:02.063 "trtype": "tcp", 00:14:02.063 "traddr": "10.0.0.3", 00:14:02.063 "adrfam": "ipv4", 00:14:02.063 "trsvcid": "4420", 00:14:02.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.063 "prchk_reftag": false, 00:14:02.063 "prchk_guard": false, 00:14:02.063 "hdgst": false, 00:14:02.063 "ddgst": false, 00:14:02.063 "psk": "key0", 00:14:02.063 "allow_unrecognized_csi": false, 00:14:02.063 "method": "bdev_nvme_attach_controller", 00:14:02.063 "req_id": 1 00:14:02.063 } 00:14:02.063 Got JSON-RPC error response 00:14:02.063 response: 00:14:02.063 { 00:14:02.063 "code": -126, 00:14:02.063 "message": "Required key not available" 00:14:02.063 } 00:14:02.063 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72639 00:14:02.063 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72639 ']' 00:14:02.063 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72639 00:14:02.063 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.063 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.063 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72639 00:14:02.063 killing process with pid 72639 00:14:02.063 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.063 00:14:02.063 Latency(us) 00:14:02.063 [2024-12-10T14:19:26.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.063 [2024-12-10T14:19:26.900Z] =================================================================================================================== 00:14:02.063 [2024-12-10T14:19:26.901Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.064 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:02.064 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:02.064 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72639' 00:14:02.064 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72639 00:14:02.064 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72639 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72455 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72455 ']' 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72455 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72455 00:14:02.323 killing process with pid 72455 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72455' 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72455 00:14:02.323 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72455 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72665 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72665 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72665 ']' 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.323 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.582 [2024-12-10 14:19:27.203437] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:02.582 [2024-12-10 14:19:27.203738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.582 [2024-12-10 14:19:27.351113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.582 [2024-12-10 14:19:27.378238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.582 [2024-12-10 14:19:27.378522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.582 [2024-12-10 14:19:27.378557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.582 [2024-12-10 14:19:27.378565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.582 [2024-12-10 14:19:27.378571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
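The freshly started target (pid 72665) repeats the setup while the key file is still mode 0666; tls.sh@178 wraps setup_nvmf_tgt in NOT because the keyring step is expected to fail and to take nvmf_subsystem_add_host down with it. The shape of the expected failure, as confirmed by the errors further down, is roughly:

    stat -c '%a' /tmp/tmp.7sN9koHjlU      # 666 at this point (illustrative check, not part of the script)
    rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU
    # -> "Operation not permitted"; the key never lands in the keyring
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # -> "Key 'key0' does not exist" / "Internal error" (code -32603)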
00:14:02.582 [2024-12-10 14:19:27.378853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.582 [2024-12-10 14:19:27.406030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.7sN9koHjlU 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.7sN9koHjlU 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.7sN9koHjlU 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7sN9koHjlU 00:14:02.841 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:03.100 [2024-12-10 14:19:27.761101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.100 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:03.360 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:03.619 [2024-12-10 14:19:28.229225] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:03.619 [2024-12-10 14:19:28.229464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:03.619 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:03.878 malloc0 00:14:03.878 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:04.137 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:04.396 
[2024-12-10 14:19:29.055182] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7sN9koHjlU': 0100666 00:14:04.396 [2024-12-10 14:19:29.055472] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:04.396 request: 00:14:04.396 { 00:14:04.396 "name": "key0", 00:14:04.396 "path": "/tmp/tmp.7sN9koHjlU", 00:14:04.396 "method": "keyring_file_add_key", 00:14:04.396 "req_id": 1 00:14:04.396 } 00:14:04.396 Got JSON-RPC error response 00:14:04.396 response: 00:14:04.396 { 00:14:04.396 "code": -1, 00:14:04.396 "message": "Operation not permitted" 00:14:04.396 } 00:14:04.397 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:04.656 [2024-12-10 14:19:29.331279] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:04.656 [2024-12-10 14:19:29.331614] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:04.656 request: 00:14:04.656 { 00:14:04.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.656 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.656 "psk": "key0", 00:14:04.656 "method": "nvmf_subsystem_add_host", 00:14:04.656 "req_id": 1 00:14:04.656 } 00:14:04.656 Got JSON-RPC error response 00:14:04.656 response: 00:14:04.656 { 00:14:04.656 "code": -32603, 00:14:04.656 "message": "Internal error" 00:14:04.656 } 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72665 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72665 ']' 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72665 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72665 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:04.656 killing process with pid 72665 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72665' 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72665 00:14:04.656 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72665 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.7sN9koHjlU 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72721 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72721 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72721 ']' 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.916 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.916 [2024-12-10 14:19:29.595370] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:04.916 [2024-12-10 14:19:29.595464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.916 [2024-12-10 14:19:29.742697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.175 [2024-12-10 14:19:29.773670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.175 [2024-12-10 14:19:29.773726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.175 [2024-12-10 14:19:29.773751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.175 [2024-12-10 14:19:29.773758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.175 [2024-12-10 14:19:29.773764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
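At tls.sh@182 the key file was chmodded back to 0600, so this third target instance (pid 72721) is expected to accept it: judging by the earlier 0100666 rejection, the keyring only loads key files that grant no group or other access. A minimal illustrative check before the successful setup that follows:

    chmod 0600 /tmp/tmp.7sN9koHjlU
    stat -c '%a' /tmp/tmp.7sN9koHjlU      # -> 600, acceptable to keyring_file_check_path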
00:14:05.175 [2024-12-10 14:19:29.774079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.175 [2024-12-10 14:19:29.801184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.7sN9koHjlU 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7sN9koHjlU 00:14:05.742 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:06.001 [2024-12-10 14:19:30.775734] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.001 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:06.260 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:06.519 [2024-12-10 14:19:31.339888] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.519 [2024-12-10 14:19:31.340355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.778 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:06.778 malloc0 00:14:06.778 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:07.344 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:07.344 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:07.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
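The two large JSON documents dumped next come from save_config: one against the target's default RPC socket (captured as tgtconf) and one against /var/tmp/bdevperf.sock (bdevperfconf). The target document is later replayed into a brand-new nvmf_tgt at tls.sh@205 through -c /dev/fd/62; the same round trip with a plain file would look like this (the file name is illustrative, the script streams the JSON through a file descriptor instead):

    rpc.py save_config > tgt.json                      # what tls.sh@198 captures into $tgtconf
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json         # equivalent of the -c /dev/fd/62 restart further down

The idea is that the keyring entry for /tmp/tmp.7sN9koHjlU, the subsystem, and the secure_channel listener are restored from the saved configuration rather than reissued RPC by RPC.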
00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72782 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72782 /var/tmp/bdevperf.sock 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72782 ']' 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.603 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.603 [2024-12-10 14:19:32.395474] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:07.603 [2024-12-10 14:19:32.395806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72782 ] 00:14:07.862 [2024-12-10 14:19:32.534548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.862 [2024-12-10 14:19:32.564921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.862 [2024-12-10 14:19:32.593047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.862 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.862 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:07.862 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:08.121 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.380 [2024-12-10 14:19:33.075719] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.381 TLSTESTn1 00:14:08.381 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:08.950 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:08.950 "subsystems": [ 00:14:08.950 { 00:14:08.950 "subsystem": "keyring", 00:14:08.950 "config": [ 00:14:08.950 { 00:14:08.950 "method": "keyring_file_add_key", 00:14:08.950 "params": { 00:14:08.950 "name": "key0", 00:14:08.950 "path": "/tmp/tmp.7sN9koHjlU" 00:14:08.950 } 00:14:08.950 } 00:14:08.950 ] 00:14:08.950 }, 
00:14:08.950 { 00:14:08.950 "subsystem": "iobuf", 00:14:08.950 "config": [ 00:14:08.950 { 00:14:08.950 "method": "iobuf_set_options", 00:14:08.950 "params": { 00:14:08.950 "small_pool_count": 8192, 00:14:08.950 "large_pool_count": 1024, 00:14:08.950 "small_bufsize": 8192, 00:14:08.950 "large_bufsize": 135168, 00:14:08.950 "enable_numa": false 00:14:08.950 } 00:14:08.950 } 00:14:08.950 ] 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "subsystem": "sock", 00:14:08.950 "config": [ 00:14:08.950 { 00:14:08.950 "method": "sock_set_default_impl", 00:14:08.950 "params": { 00:14:08.950 "impl_name": "uring" 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "sock_impl_set_options", 00:14:08.950 "params": { 00:14:08.950 "impl_name": "ssl", 00:14:08.950 "recv_buf_size": 4096, 00:14:08.950 "send_buf_size": 4096, 00:14:08.950 "enable_recv_pipe": true, 00:14:08.950 "enable_quickack": false, 00:14:08.950 "enable_placement_id": 0, 00:14:08.950 "enable_zerocopy_send_server": true, 00:14:08.950 "enable_zerocopy_send_client": false, 00:14:08.950 "zerocopy_threshold": 0, 00:14:08.950 "tls_version": 0, 00:14:08.950 "enable_ktls": false 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "sock_impl_set_options", 00:14:08.950 "params": { 00:14:08.950 "impl_name": "posix", 00:14:08.950 "recv_buf_size": 2097152, 00:14:08.950 "send_buf_size": 2097152, 00:14:08.950 "enable_recv_pipe": true, 00:14:08.950 "enable_quickack": false, 00:14:08.950 "enable_placement_id": 0, 00:14:08.950 "enable_zerocopy_send_server": true, 00:14:08.950 "enable_zerocopy_send_client": false, 00:14:08.950 "zerocopy_threshold": 0, 00:14:08.950 "tls_version": 0, 00:14:08.950 "enable_ktls": false 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "sock_impl_set_options", 00:14:08.950 "params": { 00:14:08.950 "impl_name": "uring", 00:14:08.950 "recv_buf_size": 2097152, 00:14:08.950 "send_buf_size": 2097152, 00:14:08.950 "enable_recv_pipe": true, 00:14:08.950 "enable_quickack": false, 00:14:08.950 "enable_placement_id": 0, 00:14:08.950 "enable_zerocopy_send_server": false, 00:14:08.950 "enable_zerocopy_send_client": false, 00:14:08.950 "zerocopy_threshold": 0, 00:14:08.950 "tls_version": 0, 00:14:08.950 "enable_ktls": false 00:14:08.950 } 00:14:08.950 } 00:14:08.950 ] 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "subsystem": "vmd", 00:14:08.950 "config": [] 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "subsystem": "accel", 00:14:08.950 "config": [ 00:14:08.950 { 00:14:08.950 "method": "accel_set_options", 00:14:08.950 "params": { 00:14:08.950 "small_cache_size": 128, 00:14:08.950 "large_cache_size": 16, 00:14:08.950 "task_count": 2048, 00:14:08.950 "sequence_count": 2048, 00:14:08.950 "buf_count": 2048 00:14:08.950 } 00:14:08.950 } 00:14:08.950 ] 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "subsystem": "bdev", 00:14:08.950 "config": [ 00:14:08.950 { 00:14:08.950 "method": "bdev_set_options", 00:14:08.950 "params": { 00:14:08.950 "bdev_io_pool_size": 65535, 00:14:08.950 "bdev_io_cache_size": 256, 00:14:08.950 "bdev_auto_examine": true, 00:14:08.950 "iobuf_small_cache_size": 128, 00:14:08.950 "iobuf_large_cache_size": 16 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "bdev_raid_set_options", 00:14:08.950 "params": { 00:14:08.950 "process_window_size_kb": 1024, 00:14:08.950 "process_max_bandwidth_mb_sec": 0 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "bdev_iscsi_set_options", 00:14:08.950 "params": { 00:14:08.950 "timeout_sec": 30 00:14:08.950 } 00:14:08.950 
}, 00:14:08.950 { 00:14:08.950 "method": "bdev_nvme_set_options", 00:14:08.950 "params": { 00:14:08.950 "action_on_timeout": "none", 00:14:08.950 "timeout_us": 0, 00:14:08.950 "timeout_admin_us": 0, 00:14:08.950 "keep_alive_timeout_ms": 10000, 00:14:08.950 "arbitration_burst": 0, 00:14:08.950 "low_priority_weight": 0, 00:14:08.950 "medium_priority_weight": 0, 00:14:08.950 "high_priority_weight": 0, 00:14:08.950 "nvme_adminq_poll_period_us": 10000, 00:14:08.950 "nvme_ioq_poll_period_us": 0, 00:14:08.950 "io_queue_requests": 0, 00:14:08.950 "delay_cmd_submit": true, 00:14:08.950 "transport_retry_count": 4, 00:14:08.950 "bdev_retry_count": 3, 00:14:08.950 "transport_ack_timeout": 0, 00:14:08.950 "ctrlr_loss_timeout_sec": 0, 00:14:08.950 "reconnect_delay_sec": 0, 00:14:08.950 "fast_io_fail_timeout_sec": 0, 00:14:08.950 "disable_auto_failback": false, 00:14:08.950 "generate_uuids": false, 00:14:08.950 "transport_tos": 0, 00:14:08.950 "nvme_error_stat": false, 00:14:08.950 "rdma_srq_size": 0, 00:14:08.950 "io_path_stat": false, 00:14:08.950 "allow_accel_sequence": false, 00:14:08.950 "rdma_max_cq_size": 0, 00:14:08.950 "rdma_cm_event_timeout_ms": 0, 00:14:08.950 "dhchap_digests": [ 00:14:08.950 "sha256", 00:14:08.950 "sha384", 00:14:08.950 "sha512" 00:14:08.950 ], 00:14:08.950 "dhchap_dhgroups": [ 00:14:08.950 "null", 00:14:08.950 "ffdhe2048", 00:14:08.950 "ffdhe3072", 00:14:08.950 "ffdhe4096", 00:14:08.950 "ffdhe6144", 00:14:08.950 "ffdhe8192" 00:14:08.950 ] 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "bdev_nvme_set_hotplug", 00:14:08.950 "params": { 00:14:08.950 "period_us": 100000, 00:14:08.950 "enable": false 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "bdev_malloc_create", 00:14:08.950 "params": { 00:14:08.950 "name": "malloc0", 00:14:08.950 "num_blocks": 8192, 00:14:08.950 "block_size": 4096, 00:14:08.950 "physical_block_size": 4096, 00:14:08.950 "uuid": "09ea1315-8d21-40bb-a425-4eac1371b167", 00:14:08.950 "optimal_io_boundary": 0, 00:14:08.950 "md_size": 0, 00:14:08.950 "dif_type": 0, 00:14:08.950 "dif_is_head_of_md": false, 00:14:08.950 "dif_pi_format": 0 00:14:08.950 } 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "method": "bdev_wait_for_examine" 00:14:08.950 } 00:14:08.950 ] 00:14:08.950 }, 00:14:08.950 { 00:14:08.950 "subsystem": "nbd", 00:14:08.951 "config": [] 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "subsystem": "scheduler", 00:14:08.951 "config": [ 00:14:08.951 { 00:14:08.951 "method": "framework_set_scheduler", 00:14:08.951 "params": { 00:14:08.951 "name": "static" 00:14:08.951 } 00:14:08.951 } 00:14:08.951 ] 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "subsystem": "nvmf", 00:14:08.951 "config": [ 00:14:08.951 { 00:14:08.951 "method": "nvmf_set_config", 00:14:08.951 "params": { 00:14:08.951 "discovery_filter": "match_any", 00:14:08.951 "admin_cmd_passthru": { 00:14:08.951 "identify_ctrlr": false 00:14:08.951 }, 00:14:08.951 "dhchap_digests": [ 00:14:08.951 "sha256", 00:14:08.951 "sha384", 00:14:08.951 "sha512" 00:14:08.951 ], 00:14:08.951 "dhchap_dhgroups": [ 00:14:08.951 "null", 00:14:08.951 "ffdhe2048", 00:14:08.951 "ffdhe3072", 00:14:08.951 "ffdhe4096", 00:14:08.951 "ffdhe6144", 00:14:08.951 "ffdhe8192" 00:14:08.951 ] 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_set_max_subsystems", 00:14:08.951 "params": { 00:14:08.951 "max_subsystems": 1024 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_set_crdt", 00:14:08.951 "params": { 00:14:08.951 "crdt1": 0, 00:14:08.951 
"crdt2": 0, 00:14:08.951 "crdt3": 0 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_create_transport", 00:14:08.951 "params": { 00:14:08.951 "trtype": "TCP", 00:14:08.951 "max_queue_depth": 128, 00:14:08.951 "max_io_qpairs_per_ctrlr": 127, 00:14:08.951 "in_capsule_data_size": 4096, 00:14:08.951 "max_io_size": 131072, 00:14:08.951 "io_unit_size": 131072, 00:14:08.951 "max_aq_depth": 128, 00:14:08.951 "num_shared_buffers": 511, 00:14:08.951 "buf_cache_size": 4294967295, 00:14:08.951 "dif_insert_or_strip": false, 00:14:08.951 "zcopy": false, 00:14:08.951 "c2h_success": false, 00:14:08.951 "sock_priority": 0, 00:14:08.951 "abort_timeout_sec": 1, 00:14:08.951 "ack_timeout": 0, 00:14:08.951 "data_wr_pool_size": 0 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_create_subsystem", 00:14:08.951 "params": { 00:14:08.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.951 "allow_any_host": false, 00:14:08.951 "serial_number": "SPDK00000000000001", 00:14:08.951 "model_number": "SPDK bdev Controller", 00:14:08.951 "max_namespaces": 10, 00:14:08.951 "min_cntlid": 1, 00:14:08.951 "max_cntlid": 65519, 00:14:08.951 "ana_reporting": false 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_subsystem_add_host", 00:14:08.951 "params": { 00:14:08.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.951 "host": "nqn.2016-06.io.spdk:host1", 00:14:08.951 "psk": "key0" 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_subsystem_add_ns", 00:14:08.951 "params": { 00:14:08.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.951 "namespace": { 00:14:08.951 "nsid": 1, 00:14:08.951 "bdev_name": "malloc0", 00:14:08.951 "nguid": "09EA13158D2140BBA4254EAC1371B167", 00:14:08.951 "uuid": "09ea1315-8d21-40bb-a425-4eac1371b167", 00:14:08.951 "no_auto_visible": false 00:14:08.951 } 00:14:08.951 } 00:14:08.951 }, 00:14:08.951 { 00:14:08.951 "method": "nvmf_subsystem_add_listener", 00:14:08.951 "params": { 00:14:08.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.951 "listen_address": { 00:14:08.951 "trtype": "TCP", 00:14:08.951 "adrfam": "IPv4", 00:14:08.951 "traddr": "10.0.0.3", 00:14:08.951 "trsvcid": "4420" 00:14:08.951 }, 00:14:08.951 "secure_channel": true 00:14:08.951 } 00:14:08.951 } 00:14:08.951 ] 00:14:08.951 } 00:14:08.951 ] 00:14:08.951 }' 00:14:08.951 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:09.211 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:09.211 "subsystems": [ 00:14:09.211 { 00:14:09.211 "subsystem": "keyring", 00:14:09.211 "config": [ 00:14:09.211 { 00:14:09.211 "method": "keyring_file_add_key", 00:14:09.211 "params": { 00:14:09.211 "name": "key0", 00:14:09.211 "path": "/tmp/tmp.7sN9koHjlU" 00:14:09.211 } 00:14:09.211 } 00:14:09.211 ] 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "subsystem": "iobuf", 00:14:09.211 "config": [ 00:14:09.211 { 00:14:09.211 "method": "iobuf_set_options", 00:14:09.211 "params": { 00:14:09.211 "small_pool_count": 8192, 00:14:09.211 "large_pool_count": 1024, 00:14:09.211 "small_bufsize": 8192, 00:14:09.211 "large_bufsize": 135168, 00:14:09.211 "enable_numa": false 00:14:09.211 } 00:14:09.211 } 00:14:09.211 ] 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "subsystem": "sock", 00:14:09.211 "config": [ 00:14:09.211 { 00:14:09.211 "method": "sock_set_default_impl", 00:14:09.211 "params": { 00:14:09.211 "impl_name": "uring" 00:14:09.211 
} 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "method": "sock_impl_set_options", 00:14:09.211 "params": { 00:14:09.211 "impl_name": "ssl", 00:14:09.211 "recv_buf_size": 4096, 00:14:09.211 "send_buf_size": 4096, 00:14:09.211 "enable_recv_pipe": true, 00:14:09.211 "enable_quickack": false, 00:14:09.211 "enable_placement_id": 0, 00:14:09.211 "enable_zerocopy_send_server": true, 00:14:09.211 "enable_zerocopy_send_client": false, 00:14:09.211 "zerocopy_threshold": 0, 00:14:09.211 "tls_version": 0, 00:14:09.211 "enable_ktls": false 00:14:09.211 } 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "method": "sock_impl_set_options", 00:14:09.211 "params": { 00:14:09.211 "impl_name": "posix", 00:14:09.211 "recv_buf_size": 2097152, 00:14:09.211 "send_buf_size": 2097152, 00:14:09.211 "enable_recv_pipe": true, 00:14:09.211 "enable_quickack": false, 00:14:09.211 "enable_placement_id": 0, 00:14:09.211 "enable_zerocopy_send_server": true, 00:14:09.211 "enable_zerocopy_send_client": false, 00:14:09.211 "zerocopy_threshold": 0, 00:14:09.211 "tls_version": 0, 00:14:09.211 "enable_ktls": false 00:14:09.211 } 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "method": "sock_impl_set_options", 00:14:09.211 "params": { 00:14:09.211 "impl_name": "uring", 00:14:09.211 "recv_buf_size": 2097152, 00:14:09.211 "send_buf_size": 2097152, 00:14:09.211 "enable_recv_pipe": true, 00:14:09.211 "enable_quickack": false, 00:14:09.211 "enable_placement_id": 0, 00:14:09.211 "enable_zerocopy_send_server": false, 00:14:09.211 "enable_zerocopy_send_client": false, 00:14:09.211 "zerocopy_threshold": 0, 00:14:09.211 "tls_version": 0, 00:14:09.211 "enable_ktls": false 00:14:09.211 } 00:14:09.211 } 00:14:09.211 ] 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "subsystem": "vmd", 00:14:09.211 "config": [] 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "subsystem": "accel", 00:14:09.211 "config": [ 00:14:09.211 { 00:14:09.211 "method": "accel_set_options", 00:14:09.211 "params": { 00:14:09.211 "small_cache_size": 128, 00:14:09.211 "large_cache_size": 16, 00:14:09.211 "task_count": 2048, 00:14:09.211 "sequence_count": 2048, 00:14:09.211 "buf_count": 2048 00:14:09.211 } 00:14:09.211 } 00:14:09.211 ] 00:14:09.211 }, 00:14:09.211 { 00:14:09.211 "subsystem": "bdev", 00:14:09.211 "config": [ 00:14:09.211 { 00:14:09.211 "method": "bdev_set_options", 00:14:09.211 "params": { 00:14:09.211 "bdev_io_pool_size": 65535, 00:14:09.211 "bdev_io_cache_size": 256, 00:14:09.211 "bdev_auto_examine": true, 00:14:09.211 "iobuf_small_cache_size": 128, 00:14:09.212 "iobuf_large_cache_size": 16 00:14:09.212 } 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "method": "bdev_raid_set_options", 00:14:09.212 "params": { 00:14:09.212 "process_window_size_kb": 1024, 00:14:09.212 "process_max_bandwidth_mb_sec": 0 00:14:09.212 } 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "method": "bdev_iscsi_set_options", 00:14:09.212 "params": { 00:14:09.212 "timeout_sec": 30 00:14:09.212 } 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "method": "bdev_nvme_set_options", 00:14:09.212 "params": { 00:14:09.212 "action_on_timeout": "none", 00:14:09.212 "timeout_us": 0, 00:14:09.212 "timeout_admin_us": 0, 00:14:09.212 "keep_alive_timeout_ms": 10000, 00:14:09.212 "arbitration_burst": 0, 00:14:09.212 "low_priority_weight": 0, 00:14:09.212 "medium_priority_weight": 0, 00:14:09.212 "high_priority_weight": 0, 00:14:09.212 "nvme_adminq_poll_period_us": 10000, 00:14:09.212 "nvme_ioq_poll_period_us": 0, 00:14:09.212 "io_queue_requests": 512, 00:14:09.212 "delay_cmd_submit": true, 00:14:09.212 "transport_retry_count": 4, 
00:14:09.212 "bdev_retry_count": 3, 00:14:09.212 "transport_ack_timeout": 0, 00:14:09.212 "ctrlr_loss_timeout_sec": 0, 00:14:09.212 "reconnect_delay_sec": 0, 00:14:09.212 "fast_io_fail_timeout_sec": 0, 00:14:09.212 "disable_auto_failback": false, 00:14:09.212 "generate_uuids": false, 00:14:09.212 "transport_tos": 0, 00:14:09.212 "nvme_error_stat": false, 00:14:09.212 "rdma_srq_size": 0, 00:14:09.212 "io_path_stat": false, 00:14:09.212 "allow_accel_sequence": false, 00:14:09.212 "rdma_max_cq_size": 0, 00:14:09.212 "rdma_cm_event_timeout_ms": 0, 00:14:09.212 "dhchap_digests": [ 00:14:09.212 "sha256", 00:14:09.212 "sha384", 00:14:09.212 "sha512" 00:14:09.212 ], 00:14:09.212 "dhchap_dhgroups": [ 00:14:09.212 "null", 00:14:09.212 "ffdhe2048", 00:14:09.212 "ffdhe3072", 00:14:09.212 "ffdhe4096", 00:14:09.212 "ffdhe6144", 00:14:09.212 "ffdhe8192" 00:14:09.212 ] 00:14:09.212 } 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "method": "bdev_nvme_attach_controller", 00:14:09.212 "params": { 00:14:09.212 "name": "TLSTEST", 00:14:09.212 "trtype": "TCP", 00:14:09.212 "adrfam": "IPv4", 00:14:09.212 "traddr": "10.0.0.3", 00:14:09.212 "trsvcid": "4420", 00:14:09.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.212 "prchk_reftag": false, 00:14:09.212 "prchk_guard": false, 00:14:09.212 "ctrlr_loss_timeout_sec": 0, 00:14:09.212 "reconnect_delay_sec": 0, 00:14:09.212 "fast_io_fail_timeout_sec": 0, 00:14:09.212 "psk": "key0", 00:14:09.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.212 "hdgst": false, 00:14:09.212 "ddgst": false, 00:14:09.212 "multipath": "multipath" 00:14:09.212 } 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "method": "bdev_nvme_set_hotplug", 00:14:09.212 "params": { 00:14:09.212 "period_us": 100000, 00:14:09.212 "enable": false 00:14:09.212 } 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "method": "bdev_wait_for_examine" 00:14:09.212 } 00:14:09.212 ] 00:14:09.212 }, 00:14:09.212 { 00:14:09.212 "subsystem": "nbd", 00:14:09.212 "config": [] 00:14:09.212 } 00:14:09.212 ] 00:14:09.212 }' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72782 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72782 ']' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72782 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72782 00:14:09.212 killing process with pid 72782 00:14:09.212 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.212 00:14:09.212 Latency(us) 00:14:09.212 [2024-12-10T14:19:34.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.212 [2024-12-10T14:19:34.049Z] =================================================================================================================== 00:14:09.212 [2024-12-10T14:19:34.049Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 72782' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72782 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72782 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72721 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72721 ']' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72721 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.212 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72721 00:14:09.212 killing process with pid 72721 00:14:09.212 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.212 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.212 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72721' 00:14:09.212 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72721 00:14:09.212 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72721 00:14:09.472 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:09.472 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.472 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.472 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:09.472 "subsystems": [ 00:14:09.472 { 00:14:09.472 "subsystem": "keyring", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "keyring_file_add_key", 00:14:09.472 "params": { 00:14:09.472 "name": "key0", 00:14:09.472 "path": "/tmp/tmp.7sN9koHjlU" 00:14:09.472 } 00:14:09.472 } 00:14:09.472 ] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "iobuf", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "iobuf_set_options", 00:14:09.472 "params": { 00:14:09.472 "small_pool_count": 8192, 00:14:09.472 "large_pool_count": 1024, 00:14:09.472 "small_bufsize": 8192, 00:14:09.472 "large_bufsize": 135168, 00:14:09.472 "enable_numa": false 00:14:09.472 } 00:14:09.472 } 00:14:09.472 ] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "sock", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "sock_set_default_impl", 00:14:09.472 "params": { 00:14:09.472 "impl_name": "uring" 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "sock_impl_set_options", 00:14:09.472 "params": { 00:14:09.472 "impl_name": "ssl", 00:14:09.472 "recv_buf_size": 4096, 00:14:09.472 "send_buf_size": 4096, 00:14:09.472 "enable_recv_pipe": true, 00:14:09.472 "enable_quickack": false, 00:14:09.472 "enable_placement_id": 0, 00:14:09.472 "enable_zerocopy_send_server": true, 00:14:09.472 "enable_zerocopy_send_client": false, 00:14:09.472 "zerocopy_threshold": 0, 00:14:09.472 "tls_version": 0, 00:14:09.472 "enable_ktls": false 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": 
"sock_impl_set_options", 00:14:09.472 "params": { 00:14:09.472 "impl_name": "posix", 00:14:09.472 "recv_buf_size": 2097152, 00:14:09.472 "send_buf_size": 2097152, 00:14:09.472 "enable_recv_pipe": true, 00:14:09.472 "enable_quickack": false, 00:14:09.472 "enable_placement_id": 0, 00:14:09.472 "enable_zerocopy_send_server": true, 00:14:09.472 "enable_zerocopy_send_client": false, 00:14:09.472 "zerocopy_threshold": 0, 00:14:09.472 "tls_version": 0, 00:14:09.472 "enable_ktls": false 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "sock_impl_set_options", 00:14:09.472 "params": { 00:14:09.472 "impl_name": "uring", 00:14:09.472 "recv_buf_size": 2097152, 00:14:09.472 "send_buf_size": 2097152, 00:14:09.472 "enable_recv_pipe": true, 00:14:09.472 "enable_quickack": false, 00:14:09.472 "enable_placement_id": 0, 00:14:09.472 "enable_zerocopy_send_server": false, 00:14:09.472 "enable_zerocopy_send_client": false, 00:14:09.472 "zerocopy_threshold": 0, 00:14:09.472 "tls_version": 0, 00:14:09.472 "enable_ktls": false 00:14:09.472 } 00:14:09.472 } 00:14:09.472 ] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "vmd", 00:14:09.472 "config": [] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "accel", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "accel_set_options", 00:14:09.472 "params": { 00:14:09.472 "small_cache_size": 128, 00:14:09.472 "large_cache_size": 16, 00:14:09.472 "task_count": 2048, 00:14:09.472 "sequence_count": 2048, 00:14:09.472 "buf_count": 2048 00:14:09.472 } 00:14:09.472 } 00:14:09.472 ] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "bdev", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "bdev_set_options", 00:14:09.472 "params": { 00:14:09.472 "bdev_io_pool_size": 65535, 00:14:09.472 "bdev_io_cache_size": 256, 00:14:09.472 "bdev_auto_examine": true, 00:14:09.472 "iobuf_small_cache_size": 128, 00:14:09.472 "iobuf_large_cache_size": 16 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "bdev_raid_set_options", 00:14:09.472 "params": { 00:14:09.472 "process_window_size_kb": 1024, 00:14:09.472 "process_max_bandwidth_mb_sec": 0 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "bdev_iscsi_set_options", 00:14:09.472 "params": { 00:14:09.472 "timeout_sec": 30 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "bdev_nvme_set_options", 00:14:09.472 "params": { 00:14:09.472 "action_on_timeout": "none", 00:14:09.472 "timeout_us": 0, 00:14:09.472 "timeout_admin_us": 0, 00:14:09.472 "keep_alive_timeout_ms": 10000, 00:14:09.472 "arbitration_burst": 0, 00:14:09.472 "low_priority_weight": 0, 00:14:09.472 "medium_priority_weight": 0, 00:14:09.472 "high_priority_weight": 0, 00:14:09.472 "nvme_adminq_poll_period_us": 10000, 00:14:09.472 "nvme_ioq_poll_period_us": 0, 00:14:09.472 "io_queue_requests": 0, 00:14:09.472 "delay_cmd_submit": true, 00:14:09.472 "transport_retry_count": 4, 00:14:09.472 "bdev_retry_count": 3, 00:14:09.472 "transport_ack_timeout": 0, 00:14:09.472 "ctrlr_loss_timeout_sec": 0, 00:14:09.472 "reconnect_delay_sec": 0, 00:14:09.472 "fast_io_fail_timeout_sec": 0, 00:14:09.472 "disable_auto_failback": false, 00:14:09.472 "generate_uuids": false, 00:14:09.472 "transport_tos": 0, 00:14:09.472 "nvme_error_stat": false, 00:14:09.472 "rdma_srq_size": 0, 00:14:09.472 "io_path_stat": false, 00:14:09.472 "allow_accel_sequence": false, 00:14:09.472 "rdma_max_cq_size": 0, 00:14:09.472 "rdma_cm_event_timeout_ms": 0, 00:14:09.472 "dhchap_digests": [ 00:14:09.472 
"sha256", 00:14:09.472 "sha384", 00:14:09.472 "sha512" 00:14:09.472 ], 00:14:09.472 "dhchap_dhgroups": [ 00:14:09.472 "null", 00:14:09.472 "ffdhe2048", 00:14:09.472 "ffdhe3072", 00:14:09.472 "ffdhe4096", 00:14:09.472 "ffdhe6144", 00:14:09.472 "ffdhe8192" 00:14:09.472 ] 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "bdev_nvme_set_hotplug", 00:14:09.472 "params": { 00:14:09.472 "period_us": 100000, 00:14:09.472 "enable": false 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "bdev_malloc_create", 00:14:09.472 "params": { 00:14:09.472 "name": "malloc0", 00:14:09.472 "num_blocks": 8192, 00:14:09.472 "block_size": 4096, 00:14:09.472 "physical_block_size": 4096, 00:14:09.472 "uuid": "09ea1315-8d21-40bb-a425-4eac1371b167", 00:14:09.472 "optimal_io_boundary": 0, 00:14:09.472 "md_size": 0, 00:14:09.472 "dif_type": 0, 00:14:09.472 "dif_is_head_of_md": false, 00:14:09.472 "dif_pi_format": 0 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "bdev_wait_for_examine" 00:14:09.472 } 00:14:09.472 ] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "nbd", 00:14:09.472 "config": [] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "scheduler", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "framework_set_scheduler", 00:14:09.472 "params": { 00:14:09.472 "name": "static" 00:14:09.472 } 00:14:09.472 } 00:14:09.472 ] 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "subsystem": "nvmf", 00:14:09.472 "config": [ 00:14:09.472 { 00:14:09.472 "method": "nvmf_set_config", 00:14:09.472 "params": { 00:14:09.472 "discovery_filter": "match_any", 00:14:09.472 "admin_cmd_passthru": { 00:14:09.472 "identify_ctrlr": false 00:14:09.472 }, 00:14:09.472 "dhchap_digests": [ 00:14:09.472 "sha256", 00:14:09.472 "sha384", 00:14:09.472 "sha512" 00:14:09.472 ], 00:14:09.472 "dhchap_dhgroups": [ 00:14:09.472 "null", 00:14:09.472 "ffdhe2048", 00:14:09.472 "ffdhe3072", 00:14:09.472 "ffdhe4096", 00:14:09.472 "ffdhe6144", 00:14:09.472 "ffdhe8192" 00:14:09.472 ] 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "nvmf_set_max_subsystems", 00:14:09.472 "params": { 00:14:09.472 "max_subsystems": 1024 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "nvmf_set_crdt", 00:14:09.472 "params": { 00:14:09.472 "crdt1": 0, 00:14:09.472 "crdt2": 0, 00:14:09.472 "crdt3": 0 00:14:09.472 } 00:14:09.472 }, 00:14:09.472 { 00:14:09.472 "method": "nvmf_create_transport", 00:14:09.472 "params": { 00:14:09.473 "trtype": "TCP", 00:14:09.473 "max_queue_depth": 128, 00:14:09.473 "max_io_qpairs_per_ctrlr": 127, 00:14:09.473 "in_capsule_data_size": 4096, 00:14:09.473 "max_io_size": 131072, 00:14:09.473 "io_unit_size": 131072, 00:14:09.473 "max_aq_depth": 128, 00:14:09.473 "num_shared_buffers": 511, 00:14:09.473 "buf_cache_size": 4294967295, 00:14:09.473 "dif_insert_or_strip": false, 00:14:09.473 "zcopy": false, 00:14:09.473 "c2h_success": false, 00:14:09.473 "sock_priority": 0, 00:14:09.473 "abort_timeout_sec": 1, 00:14:09.473 "ack_timeout": 0, 00:14:09.473 "data_wr_pool_size": 0 00:14:09.473 } 00:14:09.473 }, 00:14:09.473 { 00:14:09.473 "method": "nvmf_create_subsystem", 00:14:09.473 "params": { 00:14:09.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.473 "allow_any_host": false, 00:14:09.473 "serial_number": "SPDK00000000000001", 00:14:09.473 "model_number": "SPDK bdev Controller", 00:14:09.473 "max_namespaces": 10, 00:14:09.473 "min_cntlid": 1, 00:14:09.473 "max_cntlid": 65519, 00:14:09.473 "ana_reporting": false 00:14:09.473 } 
00:14:09.473 }, 00:14:09.473 { 00:14:09.473 "method": "nvmf_subsystem_add_host", 00:14:09.473 "params": { 00:14:09.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.473 "host": "nqn.2016-06.io.spdk:host1", 00:14:09.473 "psk": "key0" 00:14:09.473 } 00:14:09.473 }, 00:14:09.473 { 00:14:09.473 "method": "nvmf_subsystem_add_ns", 00:14:09.473 "params": { 00:14:09.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.473 "namespace": { 00:14:09.473 "nsid": 1, 00:14:09.473 "bdev_name": "malloc0", 00:14:09.473 "nguid": "09EA13158D2140BBA4254EAC1371B167", 00:14:09.473 "uuid": "09ea1315-8d21-40bb-a425-4eac1371b167", 00:14:09.473 "no_auto_visible": false 00:14:09.473 } 00:14:09.473 } 00:14:09.473 }, 00:14:09.473 { 00:14:09.473 "method": "nvmf_subsystem_add_listener", 00:14:09.473 "params": { 00:14:09.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.473 "listen_address": { 00:14:09.473 "trtype": "TCP", 00:14:09.473 "adrfam": "IPv4", 00:14:09.473 "traddr": "10.0.0.3", 00:14:09.473 "trsvcid": "4420" 00:14:09.473 }, 00:14:09.473 "secure_channel": true 00:14:09.473 } 00:14:09.473 } 00:14:09.473 ] 00:14:09.473 } 00:14:09.473 ] 00:14:09.473 }' 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72818 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72818 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72818 ']' 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.473 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.473 [2024-12-10 14:19:34.206886] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:09.473 [2024-12-10 14:19:34.207027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.732 [2024-12-10 14:19:34.348751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.732 [2024-12-10 14:19:34.376503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.732 [2024-12-10 14:19:34.376557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:09.732 [2024-12-10 14:19:34.376583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.732 [2024-12-10 14:19:34.376590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.732 [2024-12-10 14:19:34.376596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.732 [2024-12-10 14:19:34.376903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.732 [2024-12-10 14:19:34.518929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.991 [2024-12-10 14:19:34.575784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.991 [2024-12-10 14:19:34.607732] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:09.991 [2024-12-10 14:19:34.607933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72851 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72851 /var/tmp/bdevperf.sock 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72851 ']' 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
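The target for this pass (pid 72818) takes its whole configuration as JSON on /dev/fd/62. Stripped of the sock/bdev/iobuf boilerplate, the TLS-relevant pieces of the dump above are the keyring entry, the host entry that references it as a PSK, and the listener marked as a secure channel. Condensed from that config:

    { "subsystems": [
        { "subsystem": "keyring", "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.7sN9koHjlU" } } ] },
        { "subsystem": "nvmf", "config": [
            { "method": "nvmf_subsystem_add_host",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
            { "method": "nvmf_subsystem_add_listener",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                              "traddr": "10.0.0.3", "trsvcid": "4420" },
                          "secure_channel": true } } ] } ] }

The bdevperf side started next uses the matching pieces: the same keyring_file_add_key plus a bdev_nvme_attach_controller call carrying "psk": "key0", as its config dump below shows.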
00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:10.627 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:10.627 "subsystems": [ 00:14:10.627 { 00:14:10.627 "subsystem": "keyring", 00:14:10.627 "config": [ 00:14:10.627 { 00:14:10.627 "method": "keyring_file_add_key", 00:14:10.627 "params": { 00:14:10.627 "name": "key0", 00:14:10.627 "path": "/tmp/tmp.7sN9koHjlU" 00:14:10.627 } 00:14:10.627 } 00:14:10.627 ] 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "subsystem": "iobuf", 00:14:10.627 "config": [ 00:14:10.627 { 00:14:10.627 "method": "iobuf_set_options", 00:14:10.627 "params": { 00:14:10.627 "small_pool_count": 8192, 00:14:10.627 "large_pool_count": 1024, 00:14:10.627 "small_bufsize": 8192, 00:14:10.627 "large_bufsize": 135168, 00:14:10.627 "enable_numa": false 00:14:10.627 } 00:14:10.627 } 00:14:10.627 ] 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "subsystem": "sock", 00:14:10.627 "config": [ 00:14:10.627 { 00:14:10.627 "method": "sock_set_default_impl", 00:14:10.627 "params": { 00:14:10.627 "impl_name": "uring" 00:14:10.627 } 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "method": "sock_impl_set_options", 00:14:10.627 "params": { 00:14:10.627 "impl_name": "ssl", 00:14:10.627 "recv_buf_size": 4096, 00:14:10.627 "send_buf_size": 4096, 00:14:10.627 "enable_recv_pipe": true, 00:14:10.627 "enable_quickack": false, 00:14:10.627 "enable_placement_id": 0, 00:14:10.627 "enable_zerocopy_send_server": true, 00:14:10.627 "enable_zerocopy_send_client": false, 00:14:10.627 "zerocopy_threshold": 0, 00:14:10.627 "tls_version": 0, 00:14:10.627 "enable_ktls": false 00:14:10.627 } 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "method": "sock_impl_set_options", 00:14:10.627 "params": { 00:14:10.627 "impl_name": "posix", 00:14:10.627 "recv_buf_size": 2097152, 00:14:10.627 "send_buf_size": 2097152, 00:14:10.627 "enable_recv_pipe": true, 00:14:10.627 "enable_quickack": false, 00:14:10.627 "enable_placement_id": 0, 00:14:10.627 "enable_zerocopy_send_server": true, 00:14:10.627 "enable_zerocopy_send_client": false, 00:14:10.627 "zerocopy_threshold": 0, 00:14:10.627 "tls_version": 0, 00:14:10.627 "enable_ktls": false 00:14:10.627 } 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "method": "sock_impl_set_options", 00:14:10.627 "params": { 00:14:10.627 "impl_name": "uring", 00:14:10.627 "recv_buf_size": 2097152, 00:14:10.627 "send_buf_size": 2097152, 00:14:10.627 "enable_recv_pipe": true, 00:14:10.627 "enable_quickack": false, 00:14:10.627 "enable_placement_id": 0, 00:14:10.627 "enable_zerocopy_send_server": false, 00:14:10.627 "enable_zerocopy_send_client": false, 00:14:10.627 "zerocopy_threshold": 0, 00:14:10.627 "tls_version": 0, 00:14:10.627 "enable_ktls": false 00:14:10.627 } 00:14:10.627 } 00:14:10.627 ] 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "subsystem": "vmd", 00:14:10.627 "config": [] 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "subsystem": "accel", 00:14:10.627 "config": [ 00:14:10.627 { 00:14:10.627 "method": "accel_set_options", 00:14:10.627 "params": { 00:14:10.627 "small_cache_size": 128, 00:14:10.627 "large_cache_size": 16, 00:14:10.627 "task_count": 2048, 00:14:10.627 "sequence_count": 
2048, 00:14:10.627 "buf_count": 2048 00:14:10.627 } 00:14:10.627 } 00:14:10.627 ] 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "subsystem": "bdev", 00:14:10.627 "config": [ 00:14:10.627 { 00:14:10.627 "method": "bdev_set_options", 00:14:10.627 "params": { 00:14:10.627 "bdev_io_pool_size": 65535, 00:14:10.627 "bdev_io_cache_size": 256, 00:14:10.627 "bdev_auto_examine": true, 00:14:10.627 "iobuf_small_cache_size": 128, 00:14:10.627 "iobuf_large_cache_size": 16 00:14:10.627 } 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "method": "bdev_raid_set_options", 00:14:10.627 "params": { 00:14:10.627 "process_window_size_kb": 1024, 00:14:10.627 "process_max_bandwidth_mb_sec": 0 00:14:10.627 } 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "method": "bdev_iscsi_set_options", 00:14:10.627 "params": { 00:14:10.627 "timeout_sec": 30 00:14:10.627 } 00:14:10.627 }, 00:14:10.627 { 00:14:10.627 "method": "bdev_nvme_set_options", 00:14:10.627 "params": { 00:14:10.627 "action_on_timeout": "none", 00:14:10.627 "timeout_us": 0, 00:14:10.627 "timeout_admin_us": 0, 00:14:10.627 "keep_alive_timeout_ms": 10000, 00:14:10.627 "arbitration_burst": 0, 00:14:10.627 "low_priority_weight": 0, 00:14:10.627 "medium_priority_weight": 0, 00:14:10.627 "high_priority_weight": 0, 00:14:10.627 "nvme_adminq_poll_period_us": 10000, 00:14:10.627 "nvme_ioq_poll_period_us": 0, 00:14:10.627 "io_queue_requests": 512, 00:14:10.627 "delay_cmd_submit": true, 00:14:10.627 "transport_retry_count": 4, 00:14:10.627 "bdev_retry_count": 3, 00:14:10.627 "transport_ack_timeout": 0, 00:14:10.627 "ctrlr_loss_timeout_sec": 0, 00:14:10.627 "reconnect_delay_sec": 0, 00:14:10.627 "fast_io_fail_timeout_sec": 0, 00:14:10.627 "disable_auto_failback": false, 00:14:10.627 "generate_uuids": false, 00:14:10.627 "transport_tos": 0, 00:14:10.627 "nvme_error_stat": false, 00:14:10.627 "rdma_srq_size": 0, 00:14:10.627 "io_path_stat": false, 00:14:10.627 "allow_accel_sequence": false, 00:14:10.627 "rdma_max_cq_size": 0, 00:14:10.627 "rdma_cm_event_timeout_ms": 0, 00:14:10.627 "dhchap_digests": [ 00:14:10.627 "sha256", 00:14:10.627 "sha384", 00:14:10.627 "sha512" 00:14:10.628 ], 00:14:10.628 "dhchap_dhgroups": [ 00:14:10.628 "null", 00:14:10.628 "ffdhe2048", 00:14:10.628 "ffdhe3072", 00:14:10.628 "ffdhe4096", 00:14:10.628 "ffdhe6144", 00:14:10.628 "ffdhe8192" 00:14:10.628 ] 00:14:10.628 } 00:14:10.628 }, 00:14:10.628 { 00:14:10.628 "method": "bdev_nvme_attach_controller", 00:14:10.628 "params": { 00:14:10.628 "name": "TLSTEST", 00:14:10.628 "trtype": "TCP", 00:14:10.628 "adrfam": "IPv4", 00:14:10.628 "traddr": "10.0.0.3", 00:14:10.628 "trsvcid": "4420", 00:14:10.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.628 "prchk_reftag": false, 00:14:10.628 "prchk_guard": false, 00:14:10.628 "ctrlr_loss_timeout_sec": 0, 00:14:10.628 "reconnect_delay_sec": 0, 00:14:10.628 "fast_io_fail_timeout_sec": 0, 00:14:10.628 "psk": "key0", 00:14:10.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.628 "hdgst": false, 00:14:10.628 "ddgst": false, 00:14:10.628 "multipath": "multipath" 00:14:10.628 } 00:14:10.628 }, 00:14:10.628 { 00:14:10.628 "method": "bdev_nvme_set_hotplug", 00:14:10.628 "params": { 00:14:10.628 "period_us": 100000, 00:14:10.628 "enable": false 00:14:10.628 } 00:14:10.628 }, 00:14:10.628 { 00:14:10.628 "method": "bdev_wait_for_examine" 00:14:10.628 } 00:14:10.628 ] 00:14:10.628 }, 00:14:10.628 { 00:14:10.628 "subsystem": "nbd", 00:14:10.628 "config": [] 00:14:10.628 } 00:14:10.628 ] 00:14:10.628 }' 00:14:10.628 [2024-12-10 14:19:35.275409] Starting SPDK v25.01-pre git 
sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:10.628 [2024-12-10 14:19:35.275697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72851 ] 00:14:10.628 [2024-12-10 14:19:35.424623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.628 [2024-12-10 14:19:35.453992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.886 [2024-12-10 14:19:35.563098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.887 [2024-12-10 14:19:35.594600] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.824 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.824 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:11.824 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:11.824 Running I/O for 10 seconds... 00:14:13.698 4352.00 IOPS, 17.00 MiB/s [2024-12-10T14:19:39.471Z] 4352.00 IOPS, 17.00 MiB/s [2024-12-10T14:19:40.848Z] 4417.00 IOPS, 17.25 MiB/s [2024-12-10T14:19:41.785Z] 4431.25 IOPS, 17.31 MiB/s [2024-12-10T14:19:42.723Z] 4457.60 IOPS, 17.41 MiB/s [2024-12-10T14:19:43.660Z] 4475.50 IOPS, 17.48 MiB/s [2024-12-10T14:19:44.597Z] 4480.71 IOPS, 17.50 MiB/s [2024-12-10T14:19:45.537Z] 4489.50 IOPS, 17.54 MiB/s [2024-12-10T14:19:46.504Z] 4494.44 IOPS, 17.56 MiB/s [2024-12-10T14:19:46.504Z] 4494.70 IOPS, 17.56 MiB/s 00:14:21.667 Latency(us) 00:14:21.667 [2024-12-10T14:19:46.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.667 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:21.667 Verification LBA range: start 0x0 length 0x2000 00:14:21.667 TLSTESTn1 : 10.02 4499.16 17.57 0.00 0.00 28396.27 6196.13 22639.71 00:14:21.667 [2024-12-10T14:19:46.504Z] =================================================================================================================== 00:14:21.667 [2024-12-10T14:19:46.504Z] Total : 4499.16 17.57 0.00 0.00 28396.27 6196.13 22639.71 00:14:21.667 { 00:14:21.667 "results": [ 00:14:21.667 { 00:14:21.667 "job": "TLSTESTn1", 00:14:21.667 "core_mask": "0x4", 00:14:21.667 "workload": "verify", 00:14:21.667 "status": "finished", 00:14:21.667 "verify_range": { 00:14:21.667 "start": 0, 00:14:21.667 "length": 8192 00:14:21.667 }, 00:14:21.667 "queue_depth": 128, 00:14:21.667 "io_size": 4096, 00:14:21.667 "runtime": 10.017199, 00:14:21.667 "iops": 4499.161891462873, 00:14:21.667 "mibps": 17.57485113852685, 00:14:21.667 "io_failed": 0, 00:14:21.667 "io_timeout": 0, 00:14:21.667 "avg_latency_us": 28396.269308595507, 00:14:21.667 "min_latency_us": 6196.130909090909, 00:14:21.667 "max_latency_us": 22639.70909090909 00:14:21.667 } 00:14:21.667 ], 00:14:21.667 "core_count": 1 00:14:21.667 } 00:14:21.667 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.667 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72851 00:14:21.667 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72851 ']' 00:14:21.667 
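A note on how the 10-second run above is driven: bdevperf is launched with -z, so it only loads the JSON handed to it on /dev/fd/63 and then idles on /var/tmp/bdevperf.sock until bdevperf.py asks it to run the configured verify workload. Reproduced outside the harness the flow would look roughly like this, with bperf.json standing in for the JSON the script echoes:

    # start the initiator idle (-z), config supplied on a substituted fd
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(cat bperf.json)
    # then trigger the workload over the RPC socket (-t 20 is the script timeout)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests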
14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72851 00:14:21.667 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:21.667 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.667 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72851 00:14:21.926 killing process with pid 72851 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72851' 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72851 00:14:21.926 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.926 00:14:21.926 Latency(us) 00:14:21.926 [2024-12-10T14:19:46.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.926 [2024-12-10T14:19:46.763Z] =================================================================================================================== 00:14:21.926 [2024-12-10T14:19:46.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72851 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72818 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72818 ']' 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72818 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72818 00:14:21.926 killing process with pid 72818 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72818' 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72818 00:14:21.926 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72818 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
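Both processes from that pass are torn down through the harness killprocess helper; the repeated xtrace lines above are its individual steps. Reconstructed approximately from the trace (a sketch, not the exact autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # @954: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0    # @958: nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_1 / reactor_2
        [ "$name" = sudo ] && return 1            # @964: never kill a sudo wrapper
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid" || true                       # @978: reap it, ignore the status
    }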
00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72984 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72984 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72984 ']' 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.185 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.185 [2024-12-10 14:19:46.880075] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:22.185 [2024-12-10 14:19:46.880688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.443 [2024-12-10 14:19:47.036235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.443 [2024-12-10 14:19:47.075212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.443 [2024-12-10 14:19:47.075417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.443 [2024-12-10 14:19:47.075587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.443 [2024-12-10 14:19:47.075734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.443 [2024-12-10 14:19:47.075751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:22.444 [2024-12-10 14:19:47.076119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.444 [2024-12-10 14:19:47.110647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.010 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.010 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:23.010 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.010 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.010 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.269 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.269 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.7sN9koHjlU 00:14:23.269 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7sN9koHjlU 00:14:23.269 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:23.528 [2024-12-10 14:19:48.127772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.528 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:23.786 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:24.045 [2024-12-10 14:19:48.639882] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:24.045 [2024-12-10 14:19:48.640333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:24.045 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:24.045 malloc0 00:14:24.304 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:24.304 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:24.563 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=73045 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 73045 /var/tmp/bdevperf.sock 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73045 ']' 00:14:24.822 
14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.822 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.081 [2024-12-10 14:19:49.658353] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:25.081 [2024-12-10 14:19:49.658705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73045 ] 00:14:25.081 [2024-12-10 14:19:49.807215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.081 [2024-12-10 14:19:49.847462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.081 [2024-12-10 14:19:49.880754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.340 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.340 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:25.340 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:25.340 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:25.598 [2024-12-10 14:19:50.363366] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.857 nvme0n1 00:14:25.857 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:25.857 Running I/O for 1 seconds... 
00:14:26.794 4352.00 IOPS, 17.00 MiB/s 00:14:26.794 Latency(us) 00:14:26.794 [2024-12-10T14:19:51.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.794 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:26.794 Verification LBA range: start 0x0 length 0x2000 00:14:26.794 nvme0n1 : 1.02 4376.12 17.09 0.00 0.00 28941.49 7179.17 19779.96 00:14:26.794 [2024-12-10T14:19:51.631Z] =================================================================================================================== 00:14:26.794 [2024-12-10T14:19:51.631Z] Total : 4376.12 17.09 0.00 0.00 28941.49 7179.17 19779.96 00:14:26.794 { 00:14:26.794 "results": [ 00:14:26.794 { 00:14:26.794 "job": "nvme0n1", 00:14:26.794 "core_mask": "0x2", 00:14:26.794 "workload": "verify", 00:14:26.794 "status": "finished", 00:14:26.794 "verify_range": { 00:14:26.794 "start": 0, 00:14:26.794 "length": 8192 00:14:26.794 }, 00:14:26.794 "queue_depth": 128, 00:14:26.794 "io_size": 4096, 00:14:26.794 "runtime": 1.023739, 00:14:26.794 "iops": 4376.11539660011, 00:14:26.794 "mibps": 17.09420076796918, 00:14:26.794 "io_failed": 0, 00:14:26.794 "io_timeout": 0, 00:14:26.794 "avg_latency_us": 28941.494857142854, 00:14:26.794 "min_latency_us": 7179.170909090909, 00:14:26.794 "max_latency_us": 19779.956363636364 00:14:26.794 } 00:14:26.794 ], 00:14:26.794 "core_count": 1 00:14:26.794 } 00:14:26.794 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 73045 00:14:26.794 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73045 ']' 00:14:26.794 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73045 00:14:26.794 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.794 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.794 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73045 00:14:27.053 killing process with pid 73045 00:14:27.053 Received shutdown signal, test time was about 1.000000 seconds 00:14:27.053 00:14:27.053 Latency(us) 00:14:27.053 [2024-12-10T14:19:51.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.053 [2024-12-10T14:19:51.890Z] =================================================================================================================== 00:14:27.053 [2024-12-10T14:19:51.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73045' 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73045 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73045 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72984 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72984 ']' 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72984 00:14:27.053 14:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72984 00:14:27.053 killing process with pid 72984 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72984' 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72984 00:14:27.053 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72984 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73083 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73083 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73083 ']' 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.313 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.313 [2024-12-10 14:19:52.004835] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:27.313 [2024-12-10 14:19:52.005112] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.572 [2024-12-10 14:19:52.157388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.572 [2024-12-10 14:19:52.185180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.572 [2024-12-10 14:19:52.185234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
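Unlike the first pass, the pass that just ended (target pid 72984, bdevperf pid 73045) built the identical TLS setup one RPC at a time instead of from a pre-built config file. Condensed from the setup_nvmf_tgt and attach traces above, with rpc.py short for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    # target side (default /var/tmp/spdk.sock)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side, against bdevperf's RPC socket
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The -k on the listener plays the same role as the "secure_channel": true flag in the config-file variant.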
00:14:27.572 [2024-12-10 14:19:52.185260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.572 [2024-12-10 14:19:52.185267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.572 [2024-12-10 14:19:52.185273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.572 [2024-12-10 14:19:52.185539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.572 [2024-12-10 14:19:52.213210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.572 [2024-12-10 14:19:52.307498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.572 malloc0 00:14:27.572 [2024-12-10 14:19:52.333723] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.572 [2024-12-10 14:19:52.333917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=73102 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 73102 /var/tmp/bdevperf.sock 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73102 ']' 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
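For this last pass the target (pid 73083) is populated in-band through rpc_cmd, the harness's rpc.py wrapper (transport, malloc0, the TLS listener), and once bdevperf (pid 73102) attaches below, the script snapshots both sides with save_config; those snapshots are the large tgtcfg and bperfcfg JSON blobs further down. Taken by hand the same snapshots would be:

    # dump the running target configuration (the script keeps this in $tgtcfg)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
    # and the bdevperf side (kept in $bperfcfg)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json

Such a dump is in the same JSON format the first pass fed to the applications with -c, so it can be replayed at start-up.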
00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.572 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.832 [2024-12-10 14:19:52.408545] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:27.832 [2024-12-10 14:19:52.408815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73102 ] 00:14:27.832 [2024-12-10 14:19:52.551777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.832 [2024-12-10 14:19:52.584143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.832 [2024-12-10 14:19:52.612790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.091 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.091 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:28.091 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7sN9koHjlU 00:14:28.091 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:28.349 [2024-12-10 14:19:53.132297] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.607 nvme0n1 00:14:28.607 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.607 Running I/O for 1 seconds... 
00:14:29.543 4352.00 IOPS, 17.00 MiB/s 00:14:29.543 Latency(us) 00:14:29.543 [2024-12-10T14:19:54.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.543 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.543 Verification LBA range: start 0x0 length 0x2000 00:14:29.543 nvme0n1 : 1.02 4383.89 17.12 0.00 0.00 28911.06 7000.44 19184.17 00:14:29.543 [2024-12-10T14:19:54.380Z] =================================================================================================================== 00:14:29.543 [2024-12-10T14:19:54.380Z] Total : 4383.89 17.12 0.00 0.00 28911.06 7000.44 19184.17 00:14:29.543 { 00:14:29.543 "results": [ 00:14:29.543 { 00:14:29.543 "job": "nvme0n1", 00:14:29.543 "core_mask": "0x2", 00:14:29.543 "workload": "verify", 00:14:29.543 "status": "finished", 00:14:29.543 "verify_range": { 00:14:29.543 "start": 0, 00:14:29.543 "length": 8192 00:14:29.543 }, 00:14:29.543 "queue_depth": 128, 00:14:29.543 "io_size": 4096, 00:14:29.543 "runtime": 1.021923, 00:14:29.543 "iops": 4383.891937063751, 00:14:29.543 "mibps": 17.12457787915528, 00:14:29.543 "io_failed": 0, 00:14:29.543 "io_timeout": 0, 00:14:29.543 "avg_latency_us": 28911.06077922078, 00:14:29.543 "min_latency_us": 7000.436363636363, 00:14:29.543 "max_latency_us": 19184.174545454545 00:14:29.543 } 00:14:29.543 ], 00:14:29.543 "core_count": 1 00:14:29.543 } 00:14:29.543 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:29.543 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.543 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.802 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.802 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:29.802 "subsystems": [ 00:14:29.802 { 00:14:29.802 "subsystem": "keyring", 00:14:29.802 "config": [ 00:14:29.802 { 00:14:29.802 "method": "keyring_file_add_key", 00:14:29.802 "params": { 00:14:29.802 "name": "key0", 00:14:29.802 "path": "/tmp/tmp.7sN9koHjlU" 00:14:29.802 } 00:14:29.802 } 00:14:29.802 ] 00:14:29.802 }, 00:14:29.802 { 00:14:29.802 "subsystem": "iobuf", 00:14:29.802 "config": [ 00:14:29.802 { 00:14:29.802 "method": "iobuf_set_options", 00:14:29.802 "params": { 00:14:29.802 "small_pool_count": 8192, 00:14:29.802 "large_pool_count": 1024, 00:14:29.802 "small_bufsize": 8192, 00:14:29.802 "large_bufsize": 135168, 00:14:29.802 "enable_numa": false 00:14:29.802 } 00:14:29.802 } 00:14:29.802 ] 00:14:29.802 }, 00:14:29.802 { 00:14:29.802 "subsystem": "sock", 00:14:29.802 "config": [ 00:14:29.802 { 00:14:29.802 "method": "sock_set_default_impl", 00:14:29.802 "params": { 00:14:29.802 "impl_name": "uring" 00:14:29.802 } 00:14:29.802 }, 00:14:29.802 { 00:14:29.802 "method": "sock_impl_set_options", 00:14:29.802 "params": { 00:14:29.802 "impl_name": "ssl", 00:14:29.802 "recv_buf_size": 4096, 00:14:29.802 "send_buf_size": 4096, 00:14:29.802 "enable_recv_pipe": true, 00:14:29.802 "enable_quickack": false, 00:14:29.802 "enable_placement_id": 0, 00:14:29.802 "enable_zerocopy_send_server": true, 00:14:29.802 "enable_zerocopy_send_client": false, 00:14:29.802 "zerocopy_threshold": 0, 00:14:29.802 "tls_version": 0, 00:14:29.802 "enable_ktls": false 00:14:29.802 } 00:14:29.802 }, 00:14:29.802 { 00:14:29.802 "method": "sock_impl_set_options", 00:14:29.802 "params": { 00:14:29.802 "impl_name": "posix", 
00:14:29.802 "recv_buf_size": 2097152, 00:14:29.802 "send_buf_size": 2097152, 00:14:29.802 "enable_recv_pipe": true, 00:14:29.802 "enable_quickack": false, 00:14:29.802 "enable_placement_id": 0, 00:14:29.802 "enable_zerocopy_send_server": true, 00:14:29.802 "enable_zerocopy_send_client": false, 00:14:29.802 "zerocopy_threshold": 0, 00:14:29.802 "tls_version": 0, 00:14:29.802 "enable_ktls": false 00:14:29.802 } 00:14:29.802 }, 00:14:29.802 { 00:14:29.802 "method": "sock_impl_set_options", 00:14:29.802 "params": { 00:14:29.802 "impl_name": "uring", 00:14:29.802 "recv_buf_size": 2097152, 00:14:29.802 "send_buf_size": 2097152, 00:14:29.803 "enable_recv_pipe": true, 00:14:29.803 "enable_quickack": false, 00:14:29.803 "enable_placement_id": 0, 00:14:29.803 "enable_zerocopy_send_server": false, 00:14:29.803 "enable_zerocopy_send_client": false, 00:14:29.803 "zerocopy_threshold": 0, 00:14:29.803 "tls_version": 0, 00:14:29.803 "enable_ktls": false 00:14:29.803 } 00:14:29.803 } 00:14:29.803 ] 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "subsystem": "vmd", 00:14:29.803 "config": [] 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "subsystem": "accel", 00:14:29.803 "config": [ 00:14:29.803 { 00:14:29.803 "method": "accel_set_options", 00:14:29.803 "params": { 00:14:29.803 "small_cache_size": 128, 00:14:29.803 "large_cache_size": 16, 00:14:29.803 "task_count": 2048, 00:14:29.803 "sequence_count": 2048, 00:14:29.803 "buf_count": 2048 00:14:29.803 } 00:14:29.803 } 00:14:29.803 ] 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "subsystem": "bdev", 00:14:29.803 "config": [ 00:14:29.803 { 00:14:29.803 "method": "bdev_set_options", 00:14:29.803 "params": { 00:14:29.803 "bdev_io_pool_size": 65535, 00:14:29.803 "bdev_io_cache_size": 256, 00:14:29.803 "bdev_auto_examine": true, 00:14:29.803 "iobuf_small_cache_size": 128, 00:14:29.803 "iobuf_large_cache_size": 16 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "bdev_raid_set_options", 00:14:29.803 "params": { 00:14:29.803 "process_window_size_kb": 1024, 00:14:29.803 "process_max_bandwidth_mb_sec": 0 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "bdev_iscsi_set_options", 00:14:29.803 "params": { 00:14:29.803 "timeout_sec": 30 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "bdev_nvme_set_options", 00:14:29.803 "params": { 00:14:29.803 "action_on_timeout": "none", 00:14:29.803 "timeout_us": 0, 00:14:29.803 "timeout_admin_us": 0, 00:14:29.803 "keep_alive_timeout_ms": 10000, 00:14:29.803 "arbitration_burst": 0, 00:14:29.803 "low_priority_weight": 0, 00:14:29.803 "medium_priority_weight": 0, 00:14:29.803 "high_priority_weight": 0, 00:14:29.803 "nvme_adminq_poll_period_us": 10000, 00:14:29.803 "nvme_ioq_poll_period_us": 0, 00:14:29.803 "io_queue_requests": 0, 00:14:29.803 "delay_cmd_submit": true, 00:14:29.803 "transport_retry_count": 4, 00:14:29.803 "bdev_retry_count": 3, 00:14:29.803 "transport_ack_timeout": 0, 00:14:29.803 "ctrlr_loss_timeout_sec": 0, 00:14:29.803 "reconnect_delay_sec": 0, 00:14:29.803 "fast_io_fail_timeout_sec": 0, 00:14:29.803 "disable_auto_failback": false, 00:14:29.803 "generate_uuids": false, 00:14:29.803 "transport_tos": 0, 00:14:29.803 "nvme_error_stat": false, 00:14:29.803 "rdma_srq_size": 0, 00:14:29.803 "io_path_stat": false, 00:14:29.803 "allow_accel_sequence": false, 00:14:29.803 "rdma_max_cq_size": 0, 00:14:29.803 "rdma_cm_event_timeout_ms": 0, 00:14:29.803 "dhchap_digests": [ 00:14:29.803 "sha256", 00:14:29.803 "sha384", 00:14:29.803 "sha512" 00:14:29.803 ], 00:14:29.803 
"dhchap_dhgroups": [ 00:14:29.803 "null", 00:14:29.803 "ffdhe2048", 00:14:29.803 "ffdhe3072", 00:14:29.803 "ffdhe4096", 00:14:29.803 "ffdhe6144", 00:14:29.803 "ffdhe8192" 00:14:29.803 ] 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "bdev_nvme_set_hotplug", 00:14:29.803 "params": { 00:14:29.803 "period_us": 100000, 00:14:29.803 "enable": false 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "bdev_malloc_create", 00:14:29.803 "params": { 00:14:29.803 "name": "malloc0", 00:14:29.803 "num_blocks": 8192, 00:14:29.803 "block_size": 4096, 00:14:29.803 "physical_block_size": 4096, 00:14:29.803 "uuid": "60a00339-5136-427d-91e5-ef9c608a43da", 00:14:29.803 "optimal_io_boundary": 0, 00:14:29.803 "md_size": 0, 00:14:29.803 "dif_type": 0, 00:14:29.803 "dif_is_head_of_md": false, 00:14:29.803 "dif_pi_format": 0 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "bdev_wait_for_examine" 00:14:29.803 } 00:14:29.803 ] 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "subsystem": "nbd", 00:14:29.803 "config": [] 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "subsystem": "scheduler", 00:14:29.803 "config": [ 00:14:29.803 { 00:14:29.803 "method": "framework_set_scheduler", 00:14:29.803 "params": { 00:14:29.803 "name": "static" 00:14:29.803 } 00:14:29.803 } 00:14:29.803 ] 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "subsystem": "nvmf", 00:14:29.803 "config": [ 00:14:29.803 { 00:14:29.803 "method": "nvmf_set_config", 00:14:29.803 "params": { 00:14:29.803 "discovery_filter": "match_any", 00:14:29.803 "admin_cmd_passthru": { 00:14:29.803 "identify_ctrlr": false 00:14:29.803 }, 00:14:29.803 "dhchap_digests": [ 00:14:29.803 "sha256", 00:14:29.803 "sha384", 00:14:29.803 "sha512" 00:14:29.803 ], 00:14:29.803 "dhchap_dhgroups": [ 00:14:29.803 "null", 00:14:29.803 "ffdhe2048", 00:14:29.803 "ffdhe3072", 00:14:29.803 "ffdhe4096", 00:14:29.803 "ffdhe6144", 00:14:29.803 "ffdhe8192" 00:14:29.803 ] 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_set_max_subsystems", 00:14:29.803 "params": { 00:14:29.803 "max_subsystems": 1024 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_set_crdt", 00:14:29.803 "params": { 00:14:29.803 "crdt1": 0, 00:14:29.803 "crdt2": 0, 00:14:29.803 "crdt3": 0 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_create_transport", 00:14:29.803 "params": { 00:14:29.803 "trtype": "TCP", 00:14:29.803 "max_queue_depth": 128, 00:14:29.803 "max_io_qpairs_per_ctrlr": 127, 00:14:29.803 "in_capsule_data_size": 4096, 00:14:29.803 "max_io_size": 131072, 00:14:29.803 "io_unit_size": 131072, 00:14:29.803 "max_aq_depth": 128, 00:14:29.803 "num_shared_buffers": 511, 00:14:29.803 "buf_cache_size": 4294967295, 00:14:29.803 "dif_insert_or_strip": false, 00:14:29.803 "zcopy": false, 00:14:29.803 "c2h_success": false, 00:14:29.803 "sock_priority": 0, 00:14:29.803 "abort_timeout_sec": 1, 00:14:29.803 "ack_timeout": 0, 00:14:29.803 "data_wr_pool_size": 0 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_create_subsystem", 00:14:29.803 "params": { 00:14:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.803 "allow_any_host": false, 00:14:29.803 "serial_number": "00000000000000000000", 00:14:29.803 "model_number": "SPDK bdev Controller", 00:14:29.803 "max_namespaces": 32, 00:14:29.803 "min_cntlid": 1, 00:14:29.803 "max_cntlid": 65519, 00:14:29.803 "ana_reporting": false 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_subsystem_add_host", 
00:14:29.803 "params": { 00:14:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.803 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.803 "psk": "key0" 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_subsystem_add_ns", 00:14:29.803 "params": { 00:14:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.803 "namespace": { 00:14:29.803 "nsid": 1, 00:14:29.803 "bdev_name": "malloc0", 00:14:29.803 "nguid": "60A003395136427D91E5EF9C608A43DA", 00:14:29.803 "uuid": "60a00339-5136-427d-91e5-ef9c608a43da", 00:14:29.803 "no_auto_visible": false 00:14:29.803 } 00:14:29.803 } 00:14:29.803 }, 00:14:29.803 { 00:14:29.803 "method": "nvmf_subsystem_add_listener", 00:14:29.803 "params": { 00:14:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.803 "listen_address": { 00:14:29.803 "trtype": "TCP", 00:14:29.803 "adrfam": "IPv4", 00:14:29.803 "traddr": "10.0.0.3", 00:14:29.803 "trsvcid": "4420" 00:14:29.803 }, 00:14:29.803 "secure_channel": false, 00:14:29.803 "sock_impl": "ssl" 00:14:29.803 } 00:14:29.803 } 00:14:29.803 ] 00:14:29.803 } 00:14:29.803 ] 00:14:29.803 }' 00:14:29.803 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:30.063 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:30.063 "subsystems": [ 00:14:30.063 { 00:14:30.063 "subsystem": "keyring", 00:14:30.063 "config": [ 00:14:30.063 { 00:14:30.063 "method": "keyring_file_add_key", 00:14:30.063 "params": { 00:14:30.063 "name": "key0", 00:14:30.063 "path": "/tmp/tmp.7sN9koHjlU" 00:14:30.063 } 00:14:30.063 } 00:14:30.063 ] 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "subsystem": "iobuf", 00:14:30.063 "config": [ 00:14:30.063 { 00:14:30.063 "method": "iobuf_set_options", 00:14:30.063 "params": { 00:14:30.063 "small_pool_count": 8192, 00:14:30.063 "large_pool_count": 1024, 00:14:30.063 "small_bufsize": 8192, 00:14:30.063 "large_bufsize": 135168, 00:14:30.063 "enable_numa": false 00:14:30.063 } 00:14:30.063 } 00:14:30.063 ] 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "subsystem": "sock", 00:14:30.063 "config": [ 00:14:30.063 { 00:14:30.063 "method": "sock_set_default_impl", 00:14:30.063 "params": { 00:14:30.063 "impl_name": "uring" 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "sock_impl_set_options", 00:14:30.063 "params": { 00:14:30.063 "impl_name": "ssl", 00:14:30.063 "recv_buf_size": 4096, 00:14:30.063 "send_buf_size": 4096, 00:14:30.063 "enable_recv_pipe": true, 00:14:30.063 "enable_quickack": false, 00:14:30.063 "enable_placement_id": 0, 00:14:30.063 "enable_zerocopy_send_server": true, 00:14:30.063 "enable_zerocopy_send_client": false, 00:14:30.063 "zerocopy_threshold": 0, 00:14:30.063 "tls_version": 0, 00:14:30.063 "enable_ktls": false 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "sock_impl_set_options", 00:14:30.063 "params": { 00:14:30.063 "impl_name": "posix", 00:14:30.063 "recv_buf_size": 2097152, 00:14:30.063 "send_buf_size": 2097152, 00:14:30.063 "enable_recv_pipe": true, 00:14:30.063 "enable_quickack": false, 00:14:30.063 "enable_placement_id": 0, 00:14:30.063 "enable_zerocopy_send_server": true, 00:14:30.063 "enable_zerocopy_send_client": false, 00:14:30.063 "zerocopy_threshold": 0, 00:14:30.063 "tls_version": 0, 00:14:30.063 "enable_ktls": false 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "sock_impl_set_options", 00:14:30.063 "params": { 00:14:30.063 "impl_name": "uring", 00:14:30.063 
"recv_buf_size": 2097152, 00:14:30.063 "send_buf_size": 2097152, 00:14:30.063 "enable_recv_pipe": true, 00:14:30.063 "enable_quickack": false, 00:14:30.063 "enable_placement_id": 0, 00:14:30.063 "enable_zerocopy_send_server": false, 00:14:30.063 "enable_zerocopy_send_client": false, 00:14:30.063 "zerocopy_threshold": 0, 00:14:30.063 "tls_version": 0, 00:14:30.063 "enable_ktls": false 00:14:30.063 } 00:14:30.063 } 00:14:30.063 ] 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "subsystem": "vmd", 00:14:30.063 "config": [] 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "subsystem": "accel", 00:14:30.063 "config": [ 00:14:30.063 { 00:14:30.063 "method": "accel_set_options", 00:14:30.063 "params": { 00:14:30.063 "small_cache_size": 128, 00:14:30.063 "large_cache_size": 16, 00:14:30.063 "task_count": 2048, 00:14:30.063 "sequence_count": 2048, 00:14:30.063 "buf_count": 2048 00:14:30.063 } 00:14:30.063 } 00:14:30.063 ] 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "subsystem": "bdev", 00:14:30.063 "config": [ 00:14:30.063 { 00:14:30.063 "method": "bdev_set_options", 00:14:30.063 "params": { 00:14:30.063 "bdev_io_pool_size": 65535, 00:14:30.063 "bdev_io_cache_size": 256, 00:14:30.063 "bdev_auto_examine": true, 00:14:30.063 "iobuf_small_cache_size": 128, 00:14:30.063 "iobuf_large_cache_size": 16 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "bdev_raid_set_options", 00:14:30.063 "params": { 00:14:30.063 "process_window_size_kb": 1024, 00:14:30.063 "process_max_bandwidth_mb_sec": 0 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "bdev_iscsi_set_options", 00:14:30.063 "params": { 00:14:30.063 "timeout_sec": 30 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "bdev_nvme_set_options", 00:14:30.063 "params": { 00:14:30.063 "action_on_timeout": "none", 00:14:30.063 "timeout_us": 0, 00:14:30.063 "timeout_admin_us": 0, 00:14:30.063 "keep_alive_timeout_ms": 10000, 00:14:30.063 "arbitration_burst": 0, 00:14:30.063 "low_priority_weight": 0, 00:14:30.063 "medium_priority_weight": 0, 00:14:30.063 "high_priority_weight": 0, 00:14:30.063 "nvme_adminq_poll_period_us": 10000, 00:14:30.063 "nvme_ioq_poll_period_us": 0, 00:14:30.063 "io_queue_requests": 512, 00:14:30.063 "delay_cmd_submit": true, 00:14:30.063 "transport_retry_count": 4, 00:14:30.063 "bdev_retry_count": 3, 00:14:30.063 "transport_ack_timeout": 0, 00:14:30.063 "ctrlr_loss_timeout_sec": 0, 00:14:30.063 "reconnect_delay_sec": 0, 00:14:30.063 "fast_io_fail_timeout_sec": 0, 00:14:30.063 "disable_auto_failback": false, 00:14:30.063 "generate_uuids": false, 00:14:30.063 "transport_tos": 0, 00:14:30.063 "nvme_error_stat": false, 00:14:30.063 "rdma_srq_size": 0, 00:14:30.063 "io_path_stat": false, 00:14:30.063 "allow_accel_sequence": false, 00:14:30.063 "rdma_max_cq_size": 0, 00:14:30.063 "rdma_cm_event_timeout_ms": 0, 00:14:30.063 "dhchap_digests": [ 00:14:30.063 "sha256", 00:14:30.063 "sha384", 00:14:30.063 "sha512" 00:14:30.063 ], 00:14:30.063 "dhchap_dhgroups": [ 00:14:30.063 "null", 00:14:30.063 "ffdhe2048", 00:14:30.063 "ffdhe3072", 00:14:30.063 "ffdhe4096", 00:14:30.063 "ffdhe6144", 00:14:30.063 "ffdhe8192" 00:14:30.063 ] 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "bdev_nvme_attach_controller", 00:14:30.063 "params": { 00:14:30.063 "name": "nvme0", 00:14:30.063 "trtype": "TCP", 00:14:30.063 "adrfam": "IPv4", 00:14:30.063 "traddr": "10.0.0.3", 00:14:30.063 "trsvcid": "4420", 00:14:30.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.063 "prchk_reftag": false, 00:14:30.063 
"prchk_guard": false, 00:14:30.063 "ctrlr_loss_timeout_sec": 0, 00:14:30.063 "reconnect_delay_sec": 0, 00:14:30.063 "fast_io_fail_timeout_sec": 0, 00:14:30.063 "psk": "key0", 00:14:30.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:30.063 "hdgst": false, 00:14:30.063 "ddgst": false, 00:14:30.063 "multipath": "multipath" 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "bdev_nvme_set_hotplug", 00:14:30.063 "params": { 00:14:30.063 "period_us": 100000, 00:14:30.063 "enable": false 00:14:30.063 } 00:14:30.063 }, 00:14:30.063 { 00:14:30.063 "method": "bdev_enable_histogram", 00:14:30.063 "params": { 00:14:30.063 "name": "nvme0n1", 00:14:30.064 "enable": true 00:14:30.064 } 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "method": "bdev_wait_for_examine" 00:14:30.064 } 00:14:30.064 ] 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "subsystem": "nbd", 00:14:30.064 "config": [] 00:14:30.064 } 00:14:30.064 ] 00:14:30.064 }' 00:14:30.064 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 73102 00:14:30.064 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73102 ']' 00:14:30.064 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73102 00:14:30.064 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.064 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.064 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73102 00:14:30.323 killing process with pid 73102 00:14:30.323 Received shutdown signal, test time was about 1.000000 seconds 00:14:30.323 00:14:30.323 Latency(us) 00:14:30.323 [2024-12-10T14:19:55.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.323 [2024-12-10T14:19:55.160Z] =================================================================================================================== 00:14:30.323 [2024-12-10T14:19:55.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.323 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:30.323 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:30.323 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73102' 00:14:30.323 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73102 00:14:30.323 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73102 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 73083 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73083 ']' 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73083 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73083 00:14:30.323 killing process with pid 73083 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73083' 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73083 00:14:30.323 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73083 00:14:30.582 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:30.582 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.582 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.582 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:30.582 "subsystems": [ 00:14:30.582 { 00:14:30.582 "subsystem": "keyring", 00:14:30.582 "config": [ 00:14:30.582 { 00:14:30.582 "method": "keyring_file_add_key", 00:14:30.582 "params": { 00:14:30.582 "name": "key0", 00:14:30.582 "path": "/tmp/tmp.7sN9koHjlU" 00:14:30.582 } 00:14:30.582 } 00:14:30.582 ] 00:14:30.582 }, 00:14:30.582 { 00:14:30.582 "subsystem": "iobuf", 00:14:30.582 "config": [ 00:14:30.582 { 00:14:30.582 "method": "iobuf_set_options", 00:14:30.582 "params": { 00:14:30.582 "small_pool_count": 8192, 00:14:30.582 "large_pool_count": 1024, 00:14:30.582 "small_bufsize": 8192, 00:14:30.582 "large_bufsize": 135168, 00:14:30.582 "enable_numa": false 00:14:30.582 } 00:14:30.582 } 00:14:30.582 ] 00:14:30.582 }, 00:14:30.582 { 00:14:30.582 "subsystem": "sock", 00:14:30.582 "config": [ 00:14:30.582 { 00:14:30.582 "method": "sock_set_default_impl", 00:14:30.582 "params": { 00:14:30.582 "impl_name": "uring" 00:14:30.582 } 00:14:30.582 }, 00:14:30.582 { 00:14:30.582 "method": "sock_impl_set_options", 00:14:30.582 "params": { 00:14:30.582 "impl_name": "ssl", 00:14:30.582 "recv_buf_size": 4096, 00:14:30.582 "send_buf_size": 4096, 00:14:30.582 "enable_recv_pipe": true, 00:14:30.582 "enable_quickack": false, 00:14:30.582 "enable_placement_id": 0, 00:14:30.582 "enable_zerocopy_send_server": true, 00:14:30.582 "enable_zerocopy_send_client": false, 00:14:30.582 "zerocopy_threshold": 0, 00:14:30.582 "tls_version": 0, 00:14:30.582 "enable_ktls": false 00:14:30.582 } 00:14:30.582 }, 00:14:30.582 { 00:14:30.582 "method": "sock_impl_set_options", 00:14:30.582 "params": { 00:14:30.582 "impl_name": "posix", 00:14:30.582 "recv_buf_size": 2097152, 00:14:30.582 "send_buf_size": 2097152, 00:14:30.582 "enable_recv_pipe": true, 00:14:30.582 "enable_quickack": false, 00:14:30.582 "enable_placement_id": 0, 00:14:30.582 "enable_zerocopy_send_server": true, 00:14:30.582 "enable_zerocopy_send_client": false, 00:14:30.582 "zerocopy_threshold": 0, 00:14:30.582 "tls_version": 0, 00:14:30.582 "enable_ktls": false 00:14:30.582 } 00:14:30.582 }, 00:14:30.582 { 00:14:30.582 "method": "sock_impl_set_options", 00:14:30.582 "params": { 00:14:30.582 "impl_name": "uring", 00:14:30.582 "recv_buf_size": 2097152, 00:14:30.582 "send_buf_size": 2097152, 00:14:30.582 "enable_recv_pipe": true, 00:14:30.583 "enable_quickack": false, 00:14:30.583 "enable_placement_id": 0, 00:14:30.583 "enable_zerocopy_send_server": false, 00:14:30.583 "enable_zerocopy_send_client": false, 00:14:30.583 "zerocopy_threshold": 0, 00:14:30.583 "tls_version": 0, 00:14:30.583 "enable_ktls": false 00:14:30.583 } 00:14:30.583 } 00:14:30.583 ] 00:14:30.583 }, 00:14:30.583 { 
00:14:30.583 "subsystem": "vmd", 00:14:30.583 "config": [] 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "subsystem": "accel", 00:14:30.583 "config": [ 00:14:30.583 { 00:14:30.583 "method": "accel_set_options", 00:14:30.583 "params": { 00:14:30.583 "small_cache_size": 128, 00:14:30.583 "large_cache_size": 16, 00:14:30.583 "task_count": 2048, 00:14:30.583 "sequence_count": 2048, 00:14:30.583 "buf_count": 2048 00:14:30.583 } 00:14:30.583 } 00:14:30.583 ] 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "subsystem": "bdev", 00:14:30.583 "config": [ 00:14:30.583 { 00:14:30.583 "method": "bdev_set_options", 00:14:30.583 "params": { 00:14:30.583 "bdev_io_pool_size": 65535, 00:14:30.583 "bdev_io_cache_size": 256, 00:14:30.583 "bdev_auto_examine": true, 00:14:30.583 "iobuf_small_cache_size": 128, 00:14:30.583 "iobuf_large_cache_size": 16 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "bdev_raid_set_options", 00:14:30.583 "params": { 00:14:30.583 "process_window_size_kb": 1024, 00:14:30.583 "process_max_bandwidth_mb_sec": 0 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "bdev_iscsi_set_options", 00:14:30.583 "params": { 00:14:30.583 "timeout_sec": 30 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "bdev_nvme_set_options", 00:14:30.583 "params": { 00:14:30.583 "action_on_timeout": "none", 00:14:30.583 "timeout_us": 0, 00:14:30.583 "timeout_admin_us": 0, 00:14:30.583 "keep_alive_timeout_ms": 10000, 00:14:30.583 "arbitration_burst": 0, 00:14:30.583 "low_priority_weight": 0, 00:14:30.583 "medium_priority_weight": 0, 00:14:30.583 "high_priority_weight": 0, 00:14:30.583 "nvme_adminq_poll_period_us": 10000, 00:14:30.583 "nvme_ioq_poll_period_us": 0, 00:14:30.583 "io_queue_requests": 0, 00:14:30.583 "delay_cmd_submit": true, 00:14:30.583 "transport_retry_count": 4, 00:14:30.583 "bdev_retry_count": 3, 00:14:30.583 "transport_ack_timeout": 0, 00:14:30.583 "ctrlr_loss_timeout_sec": 0, 00:14:30.583 "reconnect_delay_sec": 0, 00:14:30.583 "fast_io_fail_timeout_sec": 0, 00:14:30.583 "disable_auto_failback": false, 00:14:30.583 "generate_uuids": false, 00:14:30.583 "transport_tos": 0, 00:14:30.583 "nvme_error_stat": false, 00:14:30.583 "rdma_srq_size": 0, 00:14:30.583 "io_path_stat": false, 00:14:30.583 "allow_accel_sequence": false, 00:14:30.583 "rdma_max_cq_size": 0, 00:14:30.583 "rdma_cm_event_timeout_ms": 0, 00:14:30.583 "dhchap_digests": [ 00:14:30.583 "sha256", 00:14:30.583 "sha384", 00:14:30.583 "sha512" 00:14:30.583 ], 00:14:30.583 "dhchap_dhgroups": [ 00:14:30.583 "null", 00:14:30.583 "ffdhe2048", 00:14:30.583 "ffdhe3072", 00:14:30.583 "ffdhe4096", 00:14:30.583 "ffdhe6144", 00:14:30.583 "ffdhe8192" 00:14:30.583 ] 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "bdev_nvme_set_hotplug", 00:14:30.583 "params": { 00:14:30.583 "period_us": 100000, 00:14:30.583 "enable": false 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "bdev_malloc_create", 00:14:30.583 "params": { 00:14:30.583 "name": "malloc0", 00:14:30.583 "num_blocks": 8192, 00:14:30.583 "block_size": 4096, 00:14:30.583 "physical_block_size": 4096, 00:14:30.583 "uuid": "60a00339-5136-427d-91e5-ef9c608a43da", 00:14:30.583 "optimal_io_boundary": 0, 00:14:30.583 "md_size": 0, 00:14:30.583 "dif_type": 0, 00:14:30.583 "dif_is_head_of_md": false, 00:14:30.583 "dif_pi_format": 0 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "bdev_wait_for_examine" 00:14:30.583 } 00:14:30.583 ] 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "subsystem": 
"nbd", 00:14:30.583 "config": [] 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "subsystem": "scheduler", 00:14:30.583 "config": [ 00:14:30.583 { 00:14:30.583 "method": "framework_set_scheduler", 00:14:30.583 "params": { 00:14:30.583 "name": "static" 00:14:30.583 } 00:14:30.583 } 00:14:30.583 ] 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "subsystem": "nvmf", 00:14:30.583 "config": [ 00:14:30.583 { 00:14:30.583 "method": "nvmf_set_config", 00:14:30.583 "params": { 00:14:30.583 "discovery_filter": "match_any", 00:14:30.583 "admin_cmd_passthru": { 00:14:30.583 "identify_ctrlr": false 00:14:30.583 }, 00:14:30.583 "dhchap_digests": [ 00:14:30.583 "sha256", 00:14:30.583 "sha384", 00:14:30.583 "sha512" 00:14:30.583 ], 00:14:30.583 "dhchap_dhgroups": [ 00:14:30.583 "null", 00:14:30.583 "ffdhe2048", 00:14:30.583 "ffdhe3072", 00:14:30.583 "ffdhe4096", 00:14:30.583 "ffdhe6144", 00:14:30.583 "ffdhe8192" 00:14:30.583 ] 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_set_max_subsystems", 00:14:30.583 "params": { 00:14:30.583 "max_subsystems": 1024 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_set_crdt", 00:14:30.583 "params": { 00:14:30.583 "crdt1": 0, 00:14:30.583 "crdt2": 0, 00:14:30.583 "crdt3": 0 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_create_transport", 00:14:30.583 "params": { 00:14:30.583 "trtype": "TCP", 00:14:30.583 "max_queue_depth": 128, 00:14:30.583 "max_io_qpairs_per_ctrlr": 127, 00:14:30.583 "in_capsule_data_size": 4096, 00:14:30.583 "max_io_size": 131072, 00:14:30.583 "io_unit_size": 131072, 00:14:30.583 "max_aq_depth": 128, 00:14:30.583 "num_shared_buffers": 511, 00:14:30.583 "buf_cache_size": 4294967295, 00:14:30.583 "dif_insert_or_strip": false, 00:14:30.583 "zcopy": false, 00:14:30.583 "c2h_success": false, 00:14:30.583 "sock_priority": 0, 00:14:30.583 "abort_timeout_sec": 1, 00:14:30.583 "ack_timeout": 0, 00:14:30.583 "data_wr_pool_size": 0 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_create_subsystem", 00:14:30.583 "params": { 00:14:30.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.583 "allow_any_host": false, 00:14:30.583 "serial_number": "00000000000000000000", 00:14:30.583 "model_number": "SPDK bdev Controller", 00:14:30.583 "max_namespaces": 32, 00:14:30.583 "min_cntlid": 1, 00:14:30.583 "max_cntlid": 65519, 00:14:30.583 "ana_reporting": false 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_subsystem_add_host", 00:14:30.583 "params": { 00:14:30.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.583 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.583 "psk": "key0" 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_subsystem_add_ns", 00:14:30.583 "params": { 00:14:30.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.583 "namespace": { 00:14:30.583 "nsid": 1, 00:14:30.583 "bdev_name": "malloc0", 00:14:30.583 "nguid": "60A003395136427D91E5EF9C608A43DA", 00:14:30.583 "uuid": "60a00339-5136-427d-91e5-ef9c608a43da", 00:14:30.583 "no_auto_visible": false 00:14:30.583 } 00:14:30.583 } 00:14:30.583 }, 00:14:30.583 { 00:14:30.583 "method": "nvmf_subsystem_add_listener", 00:14:30.583 "params": { 00:14:30.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.583 "listen_address": { 00:14:30.583 "trtype": "TCP", 00:14:30.583 "adrfam": "IPv4", 00:14:30.583 "traddr": "10.0.0.3", 00:14:30.583 "trsvcid": "4420" 00:14:30.583 }, 00:14:30.583 "secure_channel": false, 00:14:30.583 "sock_impl": "ssl" 00:14:30.583 } 00:14:30.583 } 
00:14:30.583 ] 00:14:30.583 } 00:14:30.583 ] 00:14:30.583 }' 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73154 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73154 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73154 ']' 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.583 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.583 [2024-12-10 14:19:55.276517] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:30.583 [2024-12-10 14:19:55.276860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.843 [2024-12-10 14:19:55.425380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.843 [2024-12-10 14:19:55.453573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.843 [2024-12-10 14:19:55.453620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.843 [2024-12-10 14:19:55.453646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.843 [2024-12-10 14:19:55.453653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.843 [2024-12-10 14:19:55.453659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
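The nvmf_tgt instance above is started with "-c /dev/fd/62", i.e. its JSON configuration (the blob echoed by tls.sh@273) is handed over on an anonymous file descriptor rather than a file on disk. A minimal sketch of the same pattern, assuming a shell variable tgtcfg already holding that JSON and omitting the "ip netns exec nvmf_tgt_ns_spdk" wrapper used by the test:

  # Process substitution exposes the config as a /dev/fd/NN path, matching the command line above.
  # Assumes $tgtcfg already holds the JSON configuration echoed above.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &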
00:14:30.843 [2024-12-10 14:19:55.453946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.843 [2024-12-10 14:19:55.594458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.843 [2024-12-10 14:19:55.651409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.102 [2024-12-10 14:19:55.683322] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.102 [2024-12-10 14:19:55.683552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=73186 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 73186 /var/tmp/bdevperf.sock 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73186 ']' 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:31.670 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:31.670 "subsystems": [ 00:14:31.670 { 00:14:31.670 "subsystem": "keyring", 00:14:31.670 "config": [ 00:14:31.670 { 00:14:31.670 "method": "keyring_file_add_key", 00:14:31.670 "params": { 00:14:31.670 "name": "key0", 00:14:31.670 "path": "/tmp/tmp.7sN9koHjlU" 00:14:31.670 } 00:14:31.670 } 00:14:31.670 ] 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "subsystem": "iobuf", 00:14:31.670 "config": [ 00:14:31.670 { 00:14:31.670 "method": "iobuf_set_options", 00:14:31.670 "params": { 00:14:31.670 "small_pool_count": 8192, 00:14:31.670 "large_pool_count": 1024, 00:14:31.670 "small_bufsize": 8192, 00:14:31.670 "large_bufsize": 135168, 00:14:31.670 "enable_numa": false 00:14:31.670 } 00:14:31.670 } 00:14:31.670 ] 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "subsystem": "sock", 00:14:31.670 "config": [ 00:14:31.670 { 00:14:31.670 "method": "sock_set_default_impl", 00:14:31.670 "params": { 00:14:31.670 "impl_name": "uring" 00:14:31.670 } 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "method": "sock_impl_set_options", 00:14:31.670 "params": { 00:14:31.670 "impl_name": "ssl", 00:14:31.670 "recv_buf_size": 4096, 00:14:31.670 "send_buf_size": 4096, 00:14:31.670 "enable_recv_pipe": true, 00:14:31.670 "enable_quickack": false, 00:14:31.670 "enable_placement_id": 0, 00:14:31.670 "enable_zerocopy_send_server": true, 00:14:31.670 "enable_zerocopy_send_client": false, 00:14:31.670 "zerocopy_threshold": 0, 00:14:31.670 "tls_version": 0, 00:14:31.670 "enable_ktls": false 00:14:31.670 } 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "method": "sock_impl_set_options", 00:14:31.670 "params": { 00:14:31.670 "impl_name": "posix", 00:14:31.670 "recv_buf_size": 2097152, 00:14:31.670 "send_buf_size": 2097152, 00:14:31.670 "enable_recv_pipe": true, 00:14:31.670 "enable_quickack": false, 00:14:31.670 "enable_placement_id": 0, 00:14:31.670 "enable_zerocopy_send_server": true, 00:14:31.670 "enable_zerocopy_send_client": false, 00:14:31.670 "zerocopy_threshold": 0, 00:14:31.670 "tls_version": 0, 00:14:31.670 "enable_ktls": false 00:14:31.670 } 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "method": "sock_impl_set_options", 00:14:31.670 "params": { 00:14:31.670 "impl_name": "uring", 00:14:31.670 "recv_buf_size": 2097152, 00:14:31.670 "send_buf_size": 2097152, 00:14:31.670 "enable_recv_pipe": true, 00:14:31.670 "enable_quickack": false, 00:14:31.670 "enable_placement_id": 0, 00:14:31.670 "enable_zerocopy_send_server": false, 00:14:31.670 "enable_zerocopy_send_client": false, 00:14:31.670 "zerocopy_threshold": 0, 00:14:31.670 "tls_version": 0, 00:14:31.670 "enable_ktls": false 00:14:31.670 } 00:14:31.670 } 00:14:31.670 ] 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "subsystem": "vmd", 00:14:31.670 "config": [] 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "subsystem": "accel", 00:14:31.670 "config": [ 00:14:31.670 { 00:14:31.670 "method": "accel_set_options", 00:14:31.670 "params": { 00:14:31.670 "small_cache_size": 128, 00:14:31.670 "large_cache_size": 16, 00:14:31.670 "task_count": 2048, 00:14:31.670 "sequence_count": 2048, 
00:14:31.670 "buf_count": 2048 00:14:31.670 } 00:14:31.670 } 00:14:31.670 ] 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "subsystem": "bdev", 00:14:31.670 "config": [ 00:14:31.670 { 00:14:31.670 "method": "bdev_set_options", 00:14:31.670 "params": { 00:14:31.670 "bdev_io_pool_size": 65535, 00:14:31.670 "bdev_io_cache_size": 256, 00:14:31.670 "bdev_auto_examine": true, 00:14:31.670 "iobuf_small_cache_size": 128, 00:14:31.670 "iobuf_large_cache_size": 16 00:14:31.670 } 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "method": "bdev_raid_set_options", 00:14:31.670 "params": { 00:14:31.670 "process_window_size_kb": 1024, 00:14:31.670 "process_max_bandwidth_mb_sec": 0 00:14:31.670 } 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "method": "bdev_iscsi_set_options", 00:14:31.670 "params": { 00:14:31.670 "timeout_sec": 30 00:14:31.670 } 00:14:31.670 }, 00:14:31.670 { 00:14:31.670 "method": "bdev_nvme_set_options", 00:14:31.670 "params": { 00:14:31.670 "action_on_timeout": "none", 00:14:31.670 "timeout_us": 0, 00:14:31.670 "timeout_admin_us": 0, 00:14:31.670 "keep_alive_timeout_ms": 10000, 00:14:31.670 "arbitration_burst": 0, 00:14:31.670 "low_priority_weight": 0, 00:14:31.670 "medium_priority_weight": 0, 00:14:31.670 "high_priority_weight": 0, 00:14:31.670 "nvme_adminq_poll_period_us": 10000, 00:14:31.670 "nvme_ioq_poll_period_us": 0, 00:14:31.670 "io_queue_requests": 512, 00:14:31.670 "delay_cmd_submit": true, 00:14:31.670 "transport_retry_count": 4, 00:14:31.670 "bdev_retry_count": 3, 00:14:31.670 "transport_ack_timeout": 0, 00:14:31.670 "ctrlr_loss_timeout_sec": 0, 00:14:31.670 "reconnect_delay_sec": 0, 00:14:31.670 "fast_io_fail_timeout_sec": 0, 00:14:31.670 "disable_auto_failback": false, 00:14:31.670 "generate_uuids": false, 00:14:31.670 "transport_tos": 0, 00:14:31.671 "nvme_error_stat": false, 00:14:31.671 "rdma_srq_size": 0, 00:14:31.671 "io_path_stat": false, 00:14:31.671 "allow_accel_sequence": false, 00:14:31.671 "rdma_max_cq_size": 0, 00:14:31.671 "rdma_cm_event_timeout_ms": 0, 00:14:31.671 "dhchap_digests": [ 00:14:31.671 "sha256", 00:14:31.671 "sha384", 00:14:31.671 "sha512" 00:14:31.671 ], 00:14:31.671 "dhchap_dhgroups": [ 00:14:31.671 "null", 00:14:31.671 "ffdhe2048", 00:14:31.671 "ffdhe3072", 00:14:31.671 "ffdhe4096", 00:14:31.671 "ffdhe6144", 00:14:31.671 "ffdhe8192" 00:14:31.671 ] 00:14:31.671 } 00:14:31.671 }, 00:14:31.671 { 00:14:31.671 "method": "bdev_nvme_attach_controller", 00:14:31.671 "params": { 00:14:31.671 "name": "nvme0", 00:14:31.671 "trtype": "TCP", 00:14:31.671 "adrfam": "IPv4", 00:14:31.671 "traddr": "10.0.0.3", 00:14:31.671 "trsvcid": "4420", 00:14:31.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.671 "prchk_reftag": false, 00:14:31.671 "prchk_guard": false, 00:14:31.671 "ctrlr_loss_timeout_sec": 0, 00:14:31.671 "reconnect_delay_sec": 0, 00:14:31.671 "fast_io_fail_timeout_sec": 0, 00:14:31.671 "psk": "key0", 00:14:31.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.671 "hdgst": false, 00:14:31.671 "ddgst": false, 00:14:31.671 "multipath": "multipath" 00:14:31.671 } 00:14:31.671 }, 00:14:31.671 { 00:14:31.671 "method": "bdev_nvme_set_hotplug", 00:14:31.671 "params": { 00:14:31.671 "period_us": 100000, 00:14:31.671 "enable": false 00:14:31.671 } 00:14:31.671 }, 00:14:31.671 { 00:14:31.671 "method": "bdev_enable_histogram", 00:14:31.671 "params": { 00:14:31.671 "name": "nvme0n1", 00:14:31.671 "enable": true 00:14:31.671 } 00:14:31.671 }, 00:14:31.671 { 00:14:31.671 "method": "bdev_wait_for_examine" 00:14:31.671 } 00:14:31.671 ] 00:14:31.671 }, 00:14:31.671 { 
00:14:31.671 "subsystem": "nbd", 00:14:31.671 "config": [] 00:14:31.671 } 00:14:31.671 ] 00:14:31.671 }' 00:14:31.671 [2024-12-10 14:19:56.322321] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:31.671 [2024-12-10 14:19:56.322414] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73186 ] 00:14:31.671 [2024-12-10 14:19:56.475082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.939 [2024-12-10 14:19:56.515072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.939 [2024-12-10 14:19:56.627029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.939 [2024-12-10 14:19:56.658303] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.524 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.524 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:32.524 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:32.524 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:32.783 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.783 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.043 Running I/O for 1 seconds... 
00:14:33.980 4411.00 IOPS, 17.23 MiB/s 00:14:33.980 Latency(us) 00:14:33.980 [2024-12-10T14:19:58.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.980 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.980 Verification LBA range: start 0x0 length 0x2000 00:14:33.980 nvme0n1 : 1.02 4443.94 17.36 0.00 0.00 28427.65 5808.87 19899.11 00:14:33.980 [2024-12-10T14:19:58.817Z] =================================================================================================================== 00:14:33.980 [2024-12-10T14:19:58.817Z] Total : 4443.94 17.36 0.00 0.00 28427.65 5808.87 19899.11 00:14:33.980 { 00:14:33.980 "results": [ 00:14:33.980 { 00:14:33.980 "job": "nvme0n1", 00:14:33.980 "core_mask": "0x2", 00:14:33.980 "workload": "verify", 00:14:33.980 "status": "finished", 00:14:33.980 "verify_range": { 00:14:33.980 "start": 0, 00:14:33.980 "length": 8192 00:14:33.980 }, 00:14:33.980 "queue_depth": 128, 00:14:33.980 "io_size": 4096, 00:14:33.980 "runtime": 1.021391, 00:14:33.980 "iops": 4443.939686173072, 00:14:33.980 "mibps": 17.359139399113563, 00:14:33.980 "io_failed": 0, 00:14:33.980 "io_timeout": 0, 00:14:33.980 "avg_latency_us": 28427.646105469765, 00:14:33.980 "min_latency_us": 5808.872727272727, 00:14:33.980 "max_latency_us": 19899.112727272728 00:14:33.980 } 00:14:33.980 ], 00:14:33.980 "core_count": 1 00:14:33.980 } 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:33.980 nvmf_trace.0 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:33.980 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73186 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73186 ']' 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73186 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73186 00:14:33.981 14:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:33.981 killing process with pid 73186 00:14:33.981 Received shutdown signal, test time was about 1.000000 seconds 00:14:33.981 00:14:33.981 Latency(us) 00:14:33.981 [2024-12-10T14:19:58.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.981 [2024-12-10T14:19:58.818Z] =================================================================================================================== 00:14:33.981 [2024-12-10T14:19:58.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73186' 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73186 00:14:33.981 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73186 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.240 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.240 rmmod nvme_tcp 00:14:34.240 rmmod nvme_fabrics 00:14:34.240 rmmod nvme_keyring 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 73154 ']' 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 73154 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73154 ']' 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73154 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.240 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73154 00:14:34.500 killing process with pid 73154 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73154' 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73154 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 73154 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:34.500 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pJZPmYXBtt /tmp/tmp.tiq5lJV5wl /tmp/tmp.7sN9koHjlU 00:14:34.759 00:14:34.759 real 1m19.985s 00:14:34.759 user 2m9.053s 00:14:34.759 sys 0m26.161s 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
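Both verify runs in this test print a JSON summary alongside the human-readable latency table. As a minimal sketch (results.json is a hypothetical file holding one of those JSON blocks; the field names match the output above), the headline numbers can be pulled out with jq:

  # Print per-job IOPS and average latency from a saved bdevperf result block.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json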
00:14:34.759 ************************************ 00:14:34.759 END TEST nvmf_tls 00:14:34.759 ************************************ 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.759 ************************************ 00:14:34.759 START TEST nvmf_fips 00:14:34.759 ************************************ 00:14:34.759 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:35.019 * Looking for test storage... 00:14:35.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.019 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:35.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.020 --rc genhtml_branch_coverage=1 00:14:35.020 --rc genhtml_function_coverage=1 00:14:35.020 --rc genhtml_legend=1 00:14:35.020 --rc geninfo_all_blocks=1 00:14:35.020 --rc geninfo_unexecuted_blocks=1 00:14:35.020 00:14:35.020 ' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:35.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.020 --rc genhtml_branch_coverage=1 00:14:35.020 --rc genhtml_function_coverage=1 00:14:35.020 --rc genhtml_legend=1 00:14:35.020 --rc geninfo_all_blocks=1 00:14:35.020 --rc geninfo_unexecuted_blocks=1 00:14:35.020 00:14:35.020 ' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:35.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.020 --rc genhtml_branch_coverage=1 00:14:35.020 --rc genhtml_function_coverage=1 00:14:35.020 --rc genhtml_legend=1 00:14:35.020 --rc geninfo_all_blocks=1 00:14:35.020 --rc geninfo_unexecuted_blocks=1 00:14:35.020 00:14:35.020 ' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:35.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.020 --rc genhtml_branch_coverage=1 00:14:35.020 --rc genhtml_function_coverage=1 00:14:35.020 --rc genhtml_legend=1 00:14:35.020 --rc geninfo_all_blocks=1 00:14:35.020 --rc geninfo_unexecuted_blocks=1 00:14:35.020 00:14:35.020 ' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:35.020 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.021 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:35.280 Error setting digest 00:14:35.280 40D265B9417F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:35.280 40D265B9417F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:35.280 
14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.280 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:35.281 Cannot find device "nvmf_init_br" 00:14:35.281 14:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:35.281 Cannot find device "nvmf_init_br2" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:35.281 Cannot find device "nvmf_tgt_br" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.281 Cannot find device "nvmf_tgt_br2" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:35.281 Cannot find device "nvmf_init_br" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:35.281 Cannot find device "nvmf_init_br2" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:35.281 Cannot find device "nvmf_tgt_br" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:35.281 Cannot find device "nvmf_tgt_br2" 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:35.281 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:35.281 Cannot find device "nvmf_br" 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:35.281 Cannot find device "nvmf_init_if" 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:35.281 Cannot find device "nvmf_init_if2" 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.281 14:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.281 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.540 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:35.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:14:35.541 00:14:35.541 --- 10.0.0.3 ping statistics --- 00:14:35.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.541 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:35.541 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:35.541 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 00:14:35.541 00:14:35.541 --- 10.0.0.4 ping statistics --- 00:14:35.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.541 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:35.541 00:14:35.541 --- 10.0.0.1 ping statistics --- 00:14:35.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.541 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:35.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:35.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:35.541 00:14:35.541 --- 10.0.0.2 ping statistics --- 00:14:35.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.541 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73501 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73501 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73501 ']' 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.541 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:35.800 [2024-12-10 14:20:00.456410] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:14:35.800 [2024-12-10 14:20:00.456509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.800 [2024-12-10 14:20:00.610373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.060 [2024-12-10 14:20:00.648077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.060 [2024-12-10 14:20:00.648153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.060 [2024-12-10 14:20:00.648166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.060 [2024-12-10 14:20:00.648176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.060 [2024-12-10 14:20:00.648185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.060 [2024-12-10 14:20:00.648561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.060 [2024-12-10 14:20:00.682804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.1vp 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.1vp 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.1vp 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.1vp 00:14:36.060 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.319 [2024-12-10 14:20:01.073852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.319 [2024-12-10 14:20:01.089803] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:36.319 [2024-12-10 14:20:01.090100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:36.319 malloc0 00:14:36.319 14:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73535 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73535 /var/tmp/bdevperf.sock 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73535 ']' 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.319 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.579 [2024-12-10 14:20:01.234375] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:36.579 [2024-12-10 14:20:01.234471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73535 ] 00:14:36.579 [2024-12-10 14:20:01.383250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.837 [2024-12-10 14:20:01.422479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.837 [2024-12-10 14:20:01.455488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.837 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.837 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:36.837 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1vp 00:14:37.096 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:37.356 [2024-12-10 14:20:01.994447] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.356 TLSTESTn1 00:14:37.356 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.615 Running I/O for 10 seconds... 
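The trace above is the heart of the FIPS test: fips.sh generates a TLS pre-shared key, stores it in a mode-0600 temp file, registers it with the bdevperf application over its RPC socket, and attaches an NVMe/TCP controller to the target at 10.0.0.3:4420 with --psk before driving verify I/O for ten seconds. A condensed sketch of those steps follows, using the values that appear in this particular run; the PSK, key path and RPC socket path are per-run and shown only for illustration.

# Condensed sketch of the TLS setup performed by fips.sh in the trace above,
# assuming the nvmf target is already listening on 10.0.0.3:4420 with TLS enabled.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Write the pre-shared key to a private temp file (mktemp + chmod 0600 in the trace).
key_path=/tmp/spdk-psk.1vp
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# Register the PSK with bdevperf and attach a TLS-protected NVMe/TCP controller,
# as traced at fips.sh@151-152.
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the ten-second verify workload against the attached bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the latency summary that follow are bdevperf's normal output for this ten-second verify run.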
00:14:39.489 4192.00 IOPS, 16.38 MiB/s [2024-12-10T14:20:05.263Z] 4320.50 IOPS, 16.88 MiB/s [2024-12-10T14:20:06.641Z] 4362.67 IOPS, 17.04 MiB/s [2024-12-10T14:20:07.579Z] 4371.50 IOPS, 17.08 MiB/s [2024-12-10T14:20:08.528Z] 4234.00 IOPS, 16.54 MiB/s [2024-12-10T14:20:09.464Z] 4149.50 IOPS, 16.21 MiB/s [2024-12-10T14:20:10.401Z] 4188.14 IOPS, 16.36 MiB/s [2024-12-10T14:20:11.338Z] 4178.00 IOPS, 16.32 MiB/s [2024-12-10T14:20:12.275Z] 4208.78 IOPS, 16.44 MiB/s [2024-12-10T14:20:12.275Z] 4235.50 IOPS, 16.54 MiB/s 00:14:47.438 Latency(us) 00:14:47.438 [2024-12-10T14:20:12.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.438 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:47.438 Verification LBA range: start 0x0 length 0x2000 00:14:47.438 TLSTESTn1 : 10.01 4241.79 16.57 0.00 0.00 30119.97 4617.31 25618.62 00:14:47.438 [2024-12-10T14:20:12.275Z] =================================================================================================================== 00:14:47.438 [2024-12-10T14:20:12.275Z] Total : 4241.79 16.57 0.00 0.00 30119.97 4617.31 25618.62 00:14:47.438 { 00:14:47.438 "results": [ 00:14:47.438 { 00:14:47.438 "job": "TLSTESTn1", 00:14:47.438 "core_mask": "0x4", 00:14:47.438 "workload": "verify", 00:14:47.438 "status": "finished", 00:14:47.438 "verify_range": { 00:14:47.438 "start": 0, 00:14:47.438 "length": 8192 00:14:47.438 }, 00:14:47.438 "queue_depth": 128, 00:14:47.438 "io_size": 4096, 00:14:47.439 "runtime": 10.014638, 00:14:47.439 "iops": 4241.790866529574, 00:14:47.439 "mibps": 16.569495572381147, 00:14:47.439 "io_failed": 0, 00:14:47.439 "io_timeout": 0, 00:14:47.439 "avg_latency_us": 30119.96650436569, 00:14:47.439 "min_latency_us": 4617.309090909091, 00:14:47.439 "max_latency_us": 25618.618181818183 00:14:47.439 } 00:14:47.439 ], 00:14:47.439 "core_count": 1 00:14:47.439 } 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:47.439 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:47.698 nvmf_trace.0 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73535 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73535 ']' 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73535 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73535 00:14:47.698 killing process with pid 73535 00:14:47.698 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.698 00:14:47.698 Latency(us) 00:14:47.698 [2024-12-10T14:20:12.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.698 [2024-12-10T14:20:12.535Z] =================================================================================================================== 00:14:47.698 [2024-12-10T14:20:12.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73535' 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73535 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73535 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.698 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.957 rmmod nvme_tcp 00:14:47.957 rmmod nvme_fabrics 00:14:47.957 rmmod nvme_keyring 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73501 ']' 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73501 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73501 ']' 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73501 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73501 00:14:47.957 killing process with pid 73501 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73501' 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73501 00:14:47.957 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73501 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.216 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:48.216 14:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.1vp 00:14:48.216 00:14:48.216 real 0m13.503s 00:14:48.216 user 0m18.284s 00:14:48.216 sys 0m5.604s 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.216 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:48.216 ************************************ 00:14:48.216 END TEST nvmf_fips 00:14:48.216 ************************************ 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.476 ************************************ 00:14:48.476 START TEST nvmf_control_msg_list 00:14:48.476 ************************************ 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:48.476 * Looking for test storage... 00:14:48.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:48.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.476 --rc genhtml_branch_coverage=1 00:14:48.476 --rc genhtml_function_coverage=1 00:14:48.476 --rc genhtml_legend=1 00:14:48.476 --rc geninfo_all_blocks=1 00:14:48.476 --rc geninfo_unexecuted_blocks=1 00:14:48.476 00:14:48.476 ' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:48.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.476 --rc genhtml_branch_coverage=1 00:14:48.476 --rc genhtml_function_coverage=1 00:14:48.476 --rc genhtml_legend=1 00:14:48.476 --rc geninfo_all_blocks=1 00:14:48.476 --rc geninfo_unexecuted_blocks=1 00:14:48.476 00:14:48.476 ' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:48.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.476 --rc genhtml_branch_coverage=1 00:14:48.476 --rc genhtml_function_coverage=1 00:14:48.476 --rc genhtml_legend=1 00:14:48.476 --rc geninfo_all_blocks=1 00:14:48.476 --rc geninfo_unexecuted_blocks=1 00:14:48.476 00:14:48.476 ' 00:14:48.476 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:48.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.477 --rc genhtml_branch_coverage=1 00:14:48.477 --rc genhtml_function_coverage=1 00:14:48.477 --rc genhtml_legend=1 00:14:48.477 --rc geninfo_all_blocks=1 00:14:48.477 --rc 
geninfo_unexecuted_blocks=1 00:14:48.477 00:14:48.477 ' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:48.477 Cannot find device "nvmf_init_br" 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:48.477 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:48.737 Cannot find device "nvmf_init_br2" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:48.737 Cannot find device "nvmf_tgt_br" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.737 Cannot find device "nvmf_tgt_br2" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:48.737 Cannot find device "nvmf_init_br" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:48.737 Cannot find device "nvmf_init_br2" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:48.737 Cannot find device "nvmf_tgt_br" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:48.737 Cannot find device "nvmf_tgt_br2" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:48.737 Cannot find device "nvmf_br" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:48.737 Cannot find 
device "nvmf_init_if" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:48.737 Cannot find device "nvmf_init_if2" 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:48.737 14:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.737 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:49.017 00:14:49.017 --- 10.0.0.3 ping statistics --- 00:14:49.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.017 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:14:49.017 00:14:49.017 --- 10.0.0.4 ping statistics --- 00:14:49.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.017 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:49.017 00:14:49.017 --- 10.0.0.1 ping statistics --- 00:14:49.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.017 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:49.017 00:14:49.017 --- 10.0.0.2 ping statistics --- 00:14:49.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.017 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.017 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73910 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73910 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73910 ']' 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
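For reference, the nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology so that the SPDK target, which runs inside the nvmf_tgt_ns_spdk network namespace, and the initiator tools in the root namespace can reach each other over TCP. A minimal sketch of the same setup, reduced to one interface pair and reusing the names and addresses from this log, might look like:

    ip netns add nvmf_tgt_ns_spdk                         # namespace the target runs in
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                       # bridge that joins the *_br peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                    # initiator -> target sanity check

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way, and the earlier "Cannot find device" / "Cannot open network namespace" messages are just the harness tearing down interfaces that do not exist yet before rebuilding them.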
00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.018 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:49.018 [2024-12-10 14:20:13.758399] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:49.018 [2024-12-10 14:20:13.758496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.276 [2024-12-10 14:20:13.911401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.277 [2024-12-10 14:20:13.949455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.277 [2024-12-10 14:20:13.949522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.277 [2024-12-10 14:20:13.949536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.277 [2024-12-10 14:20:13.949546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.277 [2024-12-10 14:20:13.949555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.277 [2024-12-10 14:20:13.949980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.277 [2024-12-10 14:20:13.984401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 [2024-12-10 14:20:14.811648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 Malloc0 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.214 [2024-12-10 14:20:14.850418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73942 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73943 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73944 00:14:50.215 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73942 00:14:50.215 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.215 [2024-12-10 14:20:15.024704] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:50.215 [2024-12-10 14:20:15.045049] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:50.215 [2024-12-10 14:20:15.045435] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:51.600 Initializing NVMe Controllers 00:14:51.600 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:51.600 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:51.600 Initialization complete. Launching workers. 00:14:51.600 ======================================================== 00:14:51.600 Latency(us) 00:14:51.600 Device Information : IOPS MiB/s Average min max 00:14:51.600 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3581.00 13.99 278.88 125.82 654.59 00:14:51.600 ======================================================== 00:14:51.600 Total : 3581.00 13.99 278.88 125.82 654.59 00:14:51.600 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73943 00:14:51.600 Initializing NVMe Controllers 00:14:51.600 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:51.600 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:51.600 Initialization complete. Launching workers. 00:14:51.600 ======================================================== 00:14:51.600 Latency(us) 00:14:51.600 Device Information : IOPS MiB/s Average min max 00:14:51.600 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3586.00 14.01 278.52 159.08 435.85 00:14:51.600 ======================================================== 00:14:51.600 Total : 3586.00 14.01 278.52 159.08 435.85 00:14:51.600 00:14:51.600 Initializing NVMe Controllers 00:14:51.600 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:51.600 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:51.600 Initialization complete. Launching workers. 
00:14:51.600 ======================================================== 00:14:51.600 Latency(us) 00:14:51.600 Device Information : IOPS MiB/s Average min max 00:14:51.600 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3579.00 13.98 279.01 176.96 674.58 00:14:51.600 ======================================================== 00:14:51.600 Total : 3579.00 13.98 279.01 176.96 674.58 00:14:51.600 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73944 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.600 rmmod nvme_tcp 00:14:51.600 rmmod nvme_fabrics 00:14:51.600 rmmod nvme_keyring 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73910 ']' 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73910 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73910 ']' 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73910 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73910 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.600 killing process with pid 73910 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73910' 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73910 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 73910 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.600 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:51.860 00:14:51.860 real 0m3.492s 00:14:51.860 user 0m5.607s 00:14:51.860 sys 0m1.278s 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 
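The control_msg_list test wrapping up here boils down to configuring the target over its RPC socket and then pointing three perf clients at it at once. rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the sequence in this trace is roughly equivalent to the following (arguments copied from the log; the target itself was started with 'ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF'):

    scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512        # small malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # three initiators on cores 1-3, 4 KiB random reads, queue depth 1, one second each
    for mask in 0x2 0x4 0x8; do
        build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait

With --control-msg-num 1 the transport is left with a single control message buffer, so the run presumably exercises the shared control-message list under contention: all three clients should still complete their one-second runs, which is what the three latency tables above show.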
00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.860 ************************************ 00:14:51.860 END TEST nvmf_control_msg_list 00:14:51.860 ************************************ 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.860 ************************************ 00:14:51.860 START TEST nvmf_wait_for_buf 00:14:51.860 ************************************ 00:14:51.860 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:52.119 * Looking for test storage... 00:14:52.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.119 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.119 --rc genhtml_branch_coverage=1 00:14:52.119 --rc genhtml_function_coverage=1 00:14:52.119 --rc genhtml_legend=1 00:14:52.120 --rc geninfo_all_blocks=1 00:14:52.120 --rc geninfo_unexecuted_blocks=1 00:14:52.120 00:14:52.120 ' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.120 --rc genhtml_branch_coverage=1 00:14:52.120 --rc genhtml_function_coverage=1 00:14:52.120 --rc genhtml_legend=1 00:14:52.120 --rc geninfo_all_blocks=1 00:14:52.120 --rc geninfo_unexecuted_blocks=1 00:14:52.120 00:14:52.120 ' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.120 --rc genhtml_branch_coverage=1 00:14:52.120 --rc genhtml_function_coverage=1 00:14:52.120 --rc genhtml_legend=1 00:14:52.120 --rc geninfo_all_blocks=1 00:14:52.120 --rc geninfo_unexecuted_blocks=1 00:14:52.120 00:14:52.120 ' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.120 --rc genhtml_branch_coverage=1 00:14:52.120 --rc genhtml_function_coverage=1 00:14:52.120 --rc genhtml_legend=1 00:14:52.120 --rc geninfo_all_blocks=1 00:14:52.120 --rc geninfo_unexecuted_blocks=1 00:14:52.120 00:14:52.120 ' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.120 14:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
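nvmftestinit is about to repeat the same namespace, veth and firewall setup for the wait_for_buf test. The ipts/iptr helpers it relies on (expanded at common.sh@790 and, during teardown, @791 in this trace) use a tag-and-sweep pattern: every rule is inserted with an 'SPDK_NVMF:' comment so that cleanup can later drop exactly those rules by filtering an iptables-save dump. Stripped down to plain commands, the pattern is:

    # open NVMe/TCP port 4420 on the initiator-facing interface, tagging the rule
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: keep every rule except the SPDK_NVMF-tagged ones and reload the result
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Because the comment stores the original rule specification verbatim, an interrupted test run can still be cleaned up later without tracking individual rule handles.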
00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.120 Cannot find device "nvmf_init_br" 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.120 Cannot find device "nvmf_init_br2" 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.120 Cannot find device "nvmf_tgt_br" 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.120 Cannot find device "nvmf_tgt_br2" 00:14:52.120 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.121 Cannot find device "nvmf_init_br" 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.121 Cannot find device "nvmf_init_br2" 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.121 Cannot find device "nvmf_tgt_br" 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:52.121 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.380 Cannot find device "nvmf_tgt_br2" 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.380 Cannot find device "nvmf_br" 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.380 Cannot find device "nvmf_init_if" 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.380 Cannot find device "nvmf_init_if2" 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:52.380 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.380 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.380 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.639 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.639 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:52.639 00:14:52.639 --- 10.0.0.3 ping statistics --- 00:14:52.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.639 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.639 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.639 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:14:52.639 00:14:52.639 --- 10.0.0.4 ping statistics --- 00:14:52.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.639 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:52.639 00:14:52.639 --- 10.0.0.1 ping statistics --- 00:14:52.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.639 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:52.639 00:14:52.639 --- 10.0.0.2 ping statistics --- 00:14:52.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.639 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=74180 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 74180 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 74180 ']' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.639 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:52.639 [2024-12-10 14:20:17.363251] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:14:52.639 [2024-12-10 14:20:17.363351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.898 [2024-12-10 14:20:17.502501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.898 [2024-12-10 14:20:17.531769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.898 [2024-12-10 14:20:17.531834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.898 [2024-12-10 14:20:17.531844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.898 [2024-12-10 14:20:17.531852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.898 [2024-12-10 14:20:17.531858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.898 [2024-12-10 14:20:17.532173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 [2024-12-10 14:20:18.427148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 Malloc0 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 [2024-12-10 14:20:18.470344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 [2024-12-10 14:20:18.494447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.859 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:54.118 [2024-12-10 14:20:18.696206] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:55.496 Initializing NVMe Controllers 00:14:55.496 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:55.496 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:55.496 Initialization complete. Launching workers. 00:14:55.496 ======================================================== 00:14:55.496 Latency(us) 00:14:55.496 Device Information : IOPS MiB/s Average min max 00:14:55.496 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7992.75 7212.58 8295.44 00:14:55.496 ======================================================== 00:14:55.496 Total : 504.00 63.00 7992.75 7212.58 8295.44 00:14:55.496 00:14:55.496 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:55.496 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.496 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:55.496 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.496 rmmod nvme_tcp 00:14:55.496 rmmod nvme_fabrics 00:14:55.496 rmmod nvme_keyring 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 74180 ']' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 74180 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 74180 ']' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 74180 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74180 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.496 killing process with pid 74180 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74180' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 74180 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 74180 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:55.496 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:55.756 00:14:55.756 real 0m3.928s 00:14:55.756 user 0m3.460s 00:14:55.756 sys 0m0.794s 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.756 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:55.756 ************************************ 00:14:55.756 END TEST nvmf_wait_for_buf 00:14:55.756 ************************************ 00:14:56.015 14:20:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:56.015 14:20:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:56.015 14:20:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:56.015 14:20:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.015 14:20:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.015 14:20:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.015 ************************************ 00:14:56.015 START TEST nvmf_nsid 00:14:56.015 ************************************ 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:56.016 * Looking for test storage... 
00:14:56.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.016 --rc genhtml_branch_coverage=1 00:14:56.016 --rc genhtml_function_coverage=1 00:14:56.016 --rc genhtml_legend=1 00:14:56.016 --rc geninfo_all_blocks=1 00:14:56.016 --rc geninfo_unexecuted_blocks=1 00:14:56.016 00:14:56.016 ' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.016 --rc genhtml_branch_coverage=1 00:14:56.016 --rc genhtml_function_coverage=1 00:14:56.016 --rc genhtml_legend=1 00:14:56.016 --rc geninfo_all_blocks=1 00:14:56.016 --rc geninfo_unexecuted_blocks=1 00:14:56.016 00:14:56.016 ' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.016 --rc genhtml_branch_coverage=1 00:14:56.016 --rc genhtml_function_coverage=1 00:14:56.016 --rc genhtml_legend=1 00:14:56.016 --rc geninfo_all_blocks=1 00:14:56.016 --rc geninfo_unexecuted_blocks=1 00:14:56.016 00:14:56.016 ' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.016 --rc genhtml_branch_coverage=1 00:14:56.016 --rc genhtml_function_coverage=1 00:14:56.016 --rc genhtml_legend=1 00:14:56.016 --rc geninfo_all_blocks=1 00:14:56.016 --rc geninfo_unexecuted_blocks=1 00:14:56.016 00:14:56.016 ' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.016 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:56.017 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:56.276 Cannot find device "nvmf_init_br" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:56.276 Cannot find device "nvmf_init_br2" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:56.276 Cannot find device "nvmf_tgt_br" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.276 Cannot find device "nvmf_tgt_br2" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:56.276 Cannot find device "nvmf_init_br" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:56.276 Cannot find device "nvmf_init_br2" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:56.276 Cannot find device "nvmf_tgt_br" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:56.276 Cannot find device "nvmf_tgt_br2" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:56.276 Cannot find device "nvmf_br" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:56.276 Cannot find device "nvmf_init_if" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:56.276 Cannot find device "nvmf_init_if2" 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:56.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:56.276 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:56.276 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:56.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:14:56.535 00:14:56.535 --- 10.0.0.3 ping statistics --- 00:14:56.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.535 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:56.535 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:56.535 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:56.535 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:56.535 00:14:56.535 --- 10.0.0.4 ping statistics --- 00:14:56.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.536 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:14:56.536 00:14:56.536 --- 10.0.0.1 ping statistics --- 00:14:56.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.536 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:56.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:56.536 00:14:56.536 --- 10.0.0.2 ping statistics --- 00:14:56.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.536 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74448 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74448 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74448 ']' 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.536 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:56.536 [2024-12-10 14:20:21.358020] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:14:56.536 [2024-12-10 14:20:21.358127] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.795 [2024-12-10 14:20:21.508223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.795 [2024-12-10 14:20:21.541875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.795 [2024-12-10 14:20:21.541933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.795 [2024-12-10 14:20:21.541946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.795 [2024-12-10 14:20:21.541972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.795 [2024-12-10 14:20:21.541981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.795 [2024-12-10 14:20:21.542322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.795 [2024-12-10 14:20:21.574368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.795 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.795 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:56.795 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.795 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:56.795 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74471 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:57.054 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c202cd51-0367-4311-9d91-ecafe4466832 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=57e4c07b-877b-4d63-8409-be3683661ba3 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=866977cc-424c-4e2d-b789-7092f0619131 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.055 null0 00:14:57.055 null1 00:14:57.055 null2 00:14:57.055 [2024-12-10 14:20:21.727538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.055 [2024-12-10 14:20:21.736799] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:14:57.055 [2024-12-10 14:20:21.736896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74471 ] 00:14:57.055 [2024-12-10 14:20:21.751655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74471 /var/tmp/tgt2.sock 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74471 ']' 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.055 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.055 [2024-12-10 14:20:21.886833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.314 [2024-12-10 14:20:21.926705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.314 [2024-12-10 14:20:21.971911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.314 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.314 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:57.314 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:57.882 [2024-12-10 14:20:22.523658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.882 [2024-12-10 14:20:22.539735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:57.882 nvme0n1 nvme0n2 00:14:57.882 nvme1n1 00:14:57.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:57.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:57.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:58.141 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:58.142 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:58.142 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.079 14:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c202cd51-0367-4311-9d91-ecafe4466832 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c202cd51036743119d91ecafe4466832 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C202CD51036743119D91ECAFE4466832 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C202CD51036743119D91ECAFE4466832 == \C\2\0\2\C\D\5\1\0\3\6\7\4\3\1\1\9\D\9\1\E\C\A\F\E\4\4\6\6\8\3\2 ]] 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 57e4c07b-877b-4d63-8409-be3683661ba3 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:59.079 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=57e4c07b877b4d638409be3683661ba3 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 57E4C07B877B4D638409BE3683661BA3 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 57E4C07B877B4D638409BE3683661BA3 == \5\7\E\4\C\0\7\B\8\7\7\B\4\D\6\3\8\4\0\9\B\E\3\6\8\3\6\6\1\B\A\3 ]] 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.337 14:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 866977cc-424c-4e2d-b789-7092f0619131 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:59.337 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:59.337 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=866977cc424c4e2db7897092f0619131 00:14:59.337 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 866977CC424C4E2DB7897092F0619131 00:14:59.337 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 866977CC424C4E2DB7897092F0619131 == \8\6\6\9\7\7\C\C\4\2\4\C\4\E\2\D\B\7\8\9\7\0\9\2\F\0\6\1\9\1\3\1 ]] 00:14:59.337 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74471 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74471 ']' 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74471 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74471 00:14:59.596 killing process with pid 74471 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74471' 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74471 00:14:59.596 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74471 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.855 rmmod nvme_tcp 00:14:59.855 rmmod nvme_fabrics 00:14:59.855 rmmod nvme_keyring 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74448 ']' 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74448 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74448 ']' 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74448 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74448 00:14:59.855 killing process with pid 74448 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74448' 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74448 00:14:59.855 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74448 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.115 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.374 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:00.374 00:15:00.374 real 0m4.347s 00:15:00.374 user 0m6.481s 00:15:00.374 sys 0m1.548s 00:15:00.374 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.374 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:00.374 ************************************ 00:15:00.374 END TEST nvmf_nsid 00:15:00.374 ************************************ 00:15:00.374 14:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:00.374 ************************************ 00:15:00.374 END TEST nvmf_target_extra 00:15:00.374 ************************************ 00:15:00.374 00:15:00.374 real 4m57.936s 00:15:00.374 user 10m27.786s 00:15:00.374 sys 1m4.738s 00:15:00.374 14:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.374 14:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.374 14:20:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:00.374 14:20:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.374 14:20:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.374 14:20:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.374 ************************************ 00:15:00.374 START TEST nvmf_host 00:15:00.374 ************************************ 00:15:00.374 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:00.374 * Looking for test storage... 
00:15:00.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:00.374 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.374 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.374 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.633 --rc genhtml_branch_coverage=1 00:15:00.633 --rc genhtml_function_coverage=1 00:15:00.633 --rc genhtml_legend=1 00:15:00.633 --rc geninfo_all_blocks=1 00:15:00.633 --rc geninfo_unexecuted_blocks=1 00:15:00.633 00:15:00.633 ' 00:15:00.633 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.633 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:00.633 --rc genhtml_branch_coverage=1 00:15:00.633 --rc genhtml_function_coverage=1 00:15:00.633 --rc genhtml_legend=1 00:15:00.634 --rc geninfo_all_blocks=1 00:15:00.634 --rc geninfo_unexecuted_blocks=1 00:15:00.634 00:15:00.634 ' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.634 --rc genhtml_branch_coverage=1 00:15:00.634 --rc genhtml_function_coverage=1 00:15:00.634 --rc genhtml_legend=1 00:15:00.634 --rc geninfo_all_blocks=1 00:15:00.634 --rc geninfo_unexecuted_blocks=1 00:15:00.634 00:15:00.634 ' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.634 --rc genhtml_branch_coverage=1 00:15:00.634 --rc genhtml_function_coverage=1 00:15:00.634 --rc genhtml_legend=1 00:15:00.634 --rc geninfo_all_blocks=1 00:15:00.634 --rc geninfo_unexecuted_blocks=1 00:15:00.634 00:15:00.634 ' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:00.634 
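Note: the "line 33: [: : integer expression expected" message above is emitted by the '[' '' -eq 1 ']' test in nvmf/common.sh: -eq against an empty string makes [ print that diagnostic and return non-zero, so the script simply takes the false branch and the run continues unaffected. A guarded numeric test that avoids the noise could look like this (a sketch only; SOME_FLAG is a placeholder name, not the variable common.sh actually uses):

# sketch: numeric comparison that tolerates an unset or empty variable
if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # SOME_FLAG is a hypothetical name
    echo "flag enabled"
fi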
14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:00.634 ************************************ 00:15:00.634 START TEST nvmf_identify 00:15:00.634 ************************************ 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:00.634 * Looking for test storage... 00:15:00.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.634 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.894 --rc genhtml_branch_coverage=1 00:15:00.894 --rc genhtml_function_coverage=1 00:15:00.894 --rc genhtml_legend=1 00:15:00.894 --rc geninfo_all_blocks=1 00:15:00.894 --rc geninfo_unexecuted_blocks=1 00:15:00.894 00:15:00.894 ' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.894 --rc genhtml_branch_coverage=1 00:15:00.894 --rc genhtml_function_coverage=1 00:15:00.894 --rc genhtml_legend=1 00:15:00.894 --rc geninfo_all_blocks=1 00:15:00.894 --rc geninfo_unexecuted_blocks=1 00:15:00.894 00:15:00.894 ' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.894 --rc genhtml_branch_coverage=1 00:15:00.894 --rc genhtml_function_coverage=1 00:15:00.894 --rc genhtml_legend=1 00:15:00.894 --rc geninfo_all_blocks=1 00:15:00.894 --rc geninfo_unexecuted_blocks=1 00:15:00.894 00:15:00.894 ' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.894 --rc genhtml_branch_coverage=1 00:15:00.894 --rc genhtml_function_coverage=1 00:15:00.894 --rc genhtml_legend=1 00:15:00.894 --rc geninfo_all_blocks=1 00:15:00.894 --rc geninfo_unexecuted_blocks=1 00:15:00.894 00:15:00.894 ' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.894 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.895 
14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.895 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.895 14:20:25 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:00.895 Cannot find device "nvmf_init_br" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:00.895 Cannot find device "nvmf_init_br2" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:00.895 Cannot find device "nvmf_tgt_br" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:00.895 Cannot find device "nvmf_tgt_br2" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:00.895 Cannot find device "nvmf_init_br" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:00.895 Cannot find device "nvmf_init_br2" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:00.895 Cannot find device "nvmf_tgt_br" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:00.895 Cannot find device "nvmf_tgt_br2" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:00.895 Cannot find device "nvmf_br" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:00.895 Cannot find device "nvmf_init_if" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:00.895 Cannot find device "nvmf_init_if2" 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.895 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:01.154 
14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:01.154 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:01.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:01.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:01.155 00:15:01.155 --- 10.0.0.3 ping statistics --- 00:15:01.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.155 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:01.155 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:01.155 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:01.155 00:15:01.155 --- 10.0.0.4 ping statistics --- 00:15:01.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.155 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:01.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:01.155 00:15:01.155 --- 10.0.0.1 ping statistics --- 00:15:01.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.155 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:01.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:15:01.155 00:15:01.155 --- 10.0.0.2 ping statistics --- 00:15:01.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.155 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
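Note: nvmf_veth_init above builds a small virtual topology before the target starts: the network namespace nvmf_tgt_ns_spdk holds the target-side ends of two veth pairs (10.0.0.3/24 and 10.0.0.4/24), the initiator-side ends (10.0.0.1/24 and 10.0.0.2/24) stay in the root namespace, the peer interfaces are enslaved to the nvmf_br bridge, and iptables is opened for TCP port 4420; the four pings then confirm the paths. A condensed sketch of one initiator-to-target path, using the interface names from the trace (run as root):

# sketch: one initiator<->target veth path as set up by nvmf_veth_init
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # should be answered from inside the namespace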
00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74819 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74819 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74819 ']' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.155 14:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.414 [2024-12-10 14:20:26.034700] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:01.414 [2024-12-10 14:20:26.035038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.414 [2024-12-10 14:20:26.179715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.414 [2024-12-10 14:20:26.214505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.414 [2024-12-10 14:20:26.214736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.414 [2024-12-10 14:20:26.215179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.414 [2024-12-10 14:20:26.215442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.414 [2024-12-10 14:20:26.215711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
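Note: because the target is started with -e 0xFFFF, every tracepoint group is enabled, and the notices above describe how to grab events while this instance (shm id 0) is running. A sketch of the two options the notices mention (output paths here are arbitrary choices, not taken from the trace):

# sketch: capture trace events from the running target (shm id 0)
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt     # decode a live snapshot
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0        # or keep the raw buffer for offline analysis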
00:15:01.414 [2024-12-10 14:20:26.216681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.414 [2024-12-10 14:20:26.216809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.414 [2024-12-10 14:20:26.216883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.414 [2024-12-10 14:20:26.216883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.414 [2024-12-10 14:20:26.249638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 [2024-12-10 14:20:26.308695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 Malloc0 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 [2024-12-10 14:20:26.415274] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 [ 00:15:01.673 { 00:15:01.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.673 "subtype": "Discovery", 00:15:01.673 "listen_addresses": [ 00:15:01.673 { 00:15:01.673 "trtype": "TCP", 00:15:01.673 "adrfam": "IPv4", 00:15:01.673 "traddr": "10.0.0.3", 00:15:01.673 "trsvcid": "4420" 00:15:01.673 } 00:15:01.673 ], 00:15:01.673 "allow_any_host": true, 00:15:01.673 "hosts": [] 00:15:01.673 }, 00:15:01.673 { 00:15:01.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.673 "subtype": "NVMe", 00:15:01.673 "listen_addresses": [ 00:15:01.673 { 00:15:01.673 "trtype": "TCP", 00:15:01.673 "adrfam": "IPv4", 00:15:01.673 "traddr": "10.0.0.3", 00:15:01.673 "trsvcid": "4420" 00:15:01.673 } 00:15:01.673 ], 00:15:01.673 "allow_any_host": true, 00:15:01.673 "hosts": [], 00:15:01.673 "serial_number": "SPDK00000000000001", 00:15:01.673 "model_number": "SPDK bdev Controller", 00:15:01.673 "max_namespaces": 32, 00:15:01.673 "min_cntlid": 1, 00:15:01.673 "max_cntlid": 65519, 00:15:01.673 "namespaces": [ 00:15:01.673 { 00:15:01.673 "nsid": 1, 00:15:01.673 "bdev_name": "Malloc0", 00:15:01.673 "name": "Malloc0", 00:15:01.673 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:01.673 "eui64": "ABCDEF0123456789", 00:15:01.673 "uuid": "72e0f0ee-235c-46b7-9b54-e3aca3a5b20c" 00:15:01.673 } 00:15:01.673 ] 00:15:01.673 } 00:15:01.673 ] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.673 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:01.673 [2024-12-10 14:20:26.475569] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
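The rpc_cmd calls above (host/identify.sh@24-35) build the whole target configuration: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and data plus discovery listeners on 10.0.0.3:4420, which nvmf_get_subsystems then reports back as the JSON shown. A sketch of an equivalent sequence issued directly with the repo's scripts/rpc.py against the default /var/tmp/spdk.sock socket (rpc_cmd is assumed here to be a thin wrapper around that client):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the RPC client
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems   # should list the discovery subsystem and cnode1, as above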
00:15:01.673 [2024-12-10 14:20:26.475781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74852 ] 00:15:01.935 [2024-12-10 14:20:26.631808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:01.935 [2024-12-10 14:20:26.631892] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:01.935 [2024-12-10 14:20:26.631899] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:01.935 [2024-12-10 14:20:26.631915] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:01.935 [2024-12-10 14:20:26.631927] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:01.935 [2024-12-10 14:20:26.636331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:01.935 [2024-12-10 14:20:26.636429] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cd1750 0 00:15:01.935 [2024-12-10 14:20:26.652062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:01.935 [2024-12-10 14:20:26.652088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:01.935 [2024-12-10 14:20:26.652128] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:01.935 [2024-12-10 14:20:26.652133] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:01.935 [2024-12-10 14:20:26.652169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.652177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.652182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.652197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:01.935 [2024-12-10 14:20:26.652231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.660041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.660064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.660085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.660107] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:01.935 [2024-12-10 14:20:26.660116] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:01.935 [2024-12-10 14:20:26.660123] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:01.935 [2024-12-10 14:20:26.660140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:01.935 [2024-12-10 14:20:26.660150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.660159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.935 [2024-12-10 14:20:26.660187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.660247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.660255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.660258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.660269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:01.935 [2024-12-10 14:20:26.660277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:01.935 [2024-12-10 14:20:26.660285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.660301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.935 [2024-12-10 14:20:26.660353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.660401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.660408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.660412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.660423] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:01.935 [2024-12-10 14:20:26.660432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.935 [2024-12-10 14:20:26.660440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.660456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.935 [2024-12-10 14:20:26.660475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.660535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.660543] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.660547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.660557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.935 [2024-12-10 14:20:26.660568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.660584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.935 [2024-12-10 14:20:26.660601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.660650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.660657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.660661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.660670] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:01.935 [2024-12-10 14:20:26.660676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:01.935 [2024-12-10 14:20:26.660684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.935 [2024-12-10 14:20:26.660795] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:01.935 [2024-12-10 14:20:26.660802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.935 [2024-12-10 14:20:26.660811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.660827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.935 [2024-12-10 14:20:26.660847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.660891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.660898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.660902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:15:01.935 [2024-12-10 14:20:26.660906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.660912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.935 [2024-12-10 14:20:26.660922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.660931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.935 [2024-12-10 14:20:26.660938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.935 [2024-12-10 14:20:26.660956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.935 [2024-12-10 14:20:26.661034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.935 [2024-12-10 14:20:26.661044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.935 [2024-12-10 14:20:26.661048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.661052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.935 [2024-12-10 14:20:26.661057] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.935 [2024-12-10 14:20:26.661063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:01.935 [2024-12-10 14:20:26.661072] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:01.935 [2024-12-10 14:20:26.661083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.935 [2024-12-10 14:20:26.661094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.935 [2024-12-10 14:20:26.661098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.936 [2024-12-10 14:20:26.661128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.936 [2024-12-10 14:20:26.661218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.936 [2024-12-10 14:20:26.661226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.936 [2024-12-10 14:20:26.661230] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661234] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cd1750): datao=0, datal=4096, cccid=0 00:15:01.936 [2024-12-10 14:20:26.661239] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d35740) on tqpair(0x1cd1750): expected_datao=0, payload_size=4096 00:15:01.936 [2024-12-10 14:20:26.661244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661253] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661257] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.936 [2024-12-10 14:20:26.661273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.936 [2024-12-10 14:20:26.661277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.936 [2024-12-10 14:20:26.661291] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:01.936 [2024-12-10 14:20:26.661296] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:01.936 [2024-12-10 14:20:26.661301] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:01.936 [2024-12-10 14:20:26.661307] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:01.936 [2024-12-10 14:20:26.661312] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:01.936 [2024-12-10 14:20:26.661318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:01.936 [2024-12-10 14:20:26.661341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.936 [2024-12-10 14:20:26.661349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:01.936 [2024-12-10 14:20:26.661386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.936 [2024-12-10 14:20:26.661437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.936 [2024-12-10 14:20:26.661444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.936 [2024-12-10 14:20:26.661448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.936 [2024-12-10 14:20:26.661468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.936 
[2024-12-10 14:20:26.661490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.936 [2024-12-10 14:20:26.661510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.936 [2024-12-10 14:20:26.661530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.936 [2024-12-10 14:20:26.661549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.936 [2024-12-10 14:20:26.661558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.936 [2024-12-10 14:20:26.661565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.936 [2024-12-10 14:20:26.661598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35740, cid 0, qid 0 00:15:01.936 [2024-12-10 14:20:26.661606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d358c0, cid 1, qid 0 00:15:01.936 [2024-12-10 14:20:26.661611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35a40, cid 2, qid 0 00:15:01.936 [2024-12-10 14:20:26.661616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.936 [2024-12-10 14:20:26.661621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35d40, cid 4, qid 0 00:15:01.936 [2024-12-10 14:20:26.661695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.936 [2024-12-10 14:20:26.661702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.936 [2024-12-10 14:20:26.661706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35d40) on tqpair=0x1cd1750 00:15:01.936 [2024-12-10 
14:20:26.661716] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:01.936 [2024-12-10 14:20:26.661726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:01.936 [2024-12-10 14:20:26.661738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.661751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.936 [2024-12-10 14:20:26.661770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35d40, cid 4, qid 0 00:15:01.936 [2024-12-10 14:20:26.661827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.936 [2024-12-10 14:20:26.661834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.936 [2024-12-10 14:20:26.661838] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661842] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cd1750): datao=0, datal=4096, cccid=4 00:15:01.936 [2024-12-10 14:20:26.661847] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d35d40) on tqpair(0x1cd1750): expected_datao=0, payload_size=4096 00:15:01.936 [2024-12-10 14:20:26.661851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661859] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661863] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.936 [2024-12-10 14:20:26.661878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.936 [2024-12-10 14:20:26.661881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35d40) on tqpair=0x1cd1750 00:15:01.936 [2024-12-10 14:20:26.661900] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:01.936 [2024-12-10 14:20:26.661987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.661999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.662008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.936 [2024-12-10 14:20:26.662016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cd1750) 00:15:01.936 [2024-12-10 14:20:26.662031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.936 [2024-12-10 14:20:26.662062] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35d40, cid 4, qid 0 00:15:01.936 [2024-12-10 14:20:26.662071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35ec0, cid 5, qid 0 00:15:01.936 [2024-12-10 14:20:26.662193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.936 [2024-12-10 14:20:26.662201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.936 [2024-12-10 14:20:26.662205] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662209] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cd1750): datao=0, datal=1024, cccid=4 00:15:01.936 [2024-12-10 14:20:26.662213] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d35d40) on tqpair(0x1cd1750): expected_datao=0, payload_size=1024 00:15:01.936 [2024-12-10 14:20:26.662218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662226] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662230] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.936 [2024-12-10 14:20:26.662242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.936 [2024-12-10 14:20:26.662246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.936 [2024-12-10 14:20:26.662251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35ec0) on tqpair=0x1cd1750 00:15:01.936 [2024-12-10 14:20:26.662271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.937 [2024-12-10 14:20:26.662279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.937 [2024-12-10 14:20:26.662283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35d40) on tqpair=0x1cd1750 00:15:01.937 [2024-12-10 14:20:26.662310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cd1750) 00:15:01.937 [2024-12-10 14:20:26.662324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.937 [2024-12-10 14:20:26.662365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35d40, cid 4, qid 0 00:15:01.937 [2024-12-10 14:20:26.662433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.937 [2024-12-10 14:20:26.662441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.937 [2024-12-10 14:20:26.662445] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662448] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cd1750): datao=0, datal=3072, cccid=4 00:15:01.937 [2024-12-10 14:20:26.662453] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d35d40) on tqpair(0x1cd1750): expected_datao=0, payload_size=3072 00:15:01.937 [2024-12-10 14:20:26.662458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662465] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:15:01.937 [2024-12-10 14:20:26.662469] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.937 [2024-12-10 14:20:26.662484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.937 [2024-12-10 14:20:26.662487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35d40) on tqpair=0x1cd1750 00:15:01.937 [2024-12-10 14:20:26.662502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cd1750) 00:15:01.937 [2024-12-10 14:20:26.662514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.937 [2024-12-10 14:20:26.662538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35d40, cid 4, qid 0 00:15:01.937 [2024-12-10 14:20:26.662596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.937 [2024-12-10 14:20:26.662603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.937 [2024-12-10 14:20:26.662607] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cd1750): datao=0, datal=8, cccid=4 00:15:01.937 [2024-12-10 14:20:26.662616] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d35d40) on tqpair(0x1cd1750): expected_datao=0, payload_size=8 00:15:01.937 [2024-12-10 14:20:26.662620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662627] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662631] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.937 [2024-12-10 14:20:26.662655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.937 [2024-12-10 14:20:26.662659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.937 [2024-12-10 14:20:26.662663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35d40) on tqpair=0x1cd1750 00:15:01.937 ===================================================== 00:15:01.937 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:01.937 ===================================================== 00:15:01.937 Controller Capabilities/Features 00:15:01.937 ================================ 00:15:01.937 Vendor ID: 0000 00:15:01.937 Subsystem Vendor ID: 0000 00:15:01.937 Serial Number: .................... 00:15:01.937 Model Number: ........................................ 
00:15:01.937 Firmware Version: 25.01 00:15:01.937 Recommended Arb Burst: 0 00:15:01.937 IEEE OUI Identifier: 00 00 00 00:15:01.937 Multi-path I/O 00:15:01.937 May have multiple subsystem ports: No 00:15:01.937 May have multiple controllers: No 00:15:01.937 Associated with SR-IOV VF: No 00:15:01.937 Max Data Transfer Size: 131072 00:15:01.937 Max Number of Namespaces: 0 00:15:01.937 Max Number of I/O Queues: 1024 00:15:01.937 NVMe Specification Version (VS): 1.3 00:15:01.937 NVMe Specification Version (Identify): 1.3 00:15:01.937 Maximum Queue Entries: 128 00:15:01.937 Contiguous Queues Required: Yes 00:15:01.937 Arbitration Mechanisms Supported 00:15:01.937 Weighted Round Robin: Not Supported 00:15:01.937 Vendor Specific: Not Supported 00:15:01.937 Reset Timeout: 15000 ms 00:15:01.937 Doorbell Stride: 4 bytes 00:15:01.937 NVM Subsystem Reset: Not Supported 00:15:01.937 Command Sets Supported 00:15:01.937 NVM Command Set: Supported 00:15:01.937 Boot Partition: Not Supported 00:15:01.937 Memory Page Size Minimum: 4096 bytes 00:15:01.937 Memory Page Size Maximum: 4096 bytes 00:15:01.937 Persistent Memory Region: Not Supported 00:15:01.937 Optional Asynchronous Events Supported 00:15:01.937 Namespace Attribute Notices: Not Supported 00:15:01.937 Firmware Activation Notices: Not Supported 00:15:01.937 ANA Change Notices: Not Supported 00:15:01.937 PLE Aggregate Log Change Notices: Not Supported 00:15:01.937 LBA Status Info Alert Notices: Not Supported 00:15:01.937 EGE Aggregate Log Change Notices: Not Supported 00:15:01.937 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.937 Zone Descriptor Change Notices: Not Supported 00:15:01.937 Discovery Log Change Notices: Supported 00:15:01.937 Controller Attributes 00:15:01.937 128-bit Host Identifier: Not Supported 00:15:01.937 Non-Operational Permissive Mode: Not Supported 00:15:01.937 NVM Sets: Not Supported 00:15:01.937 Read Recovery Levels: Not Supported 00:15:01.937 Endurance Groups: Not Supported 00:15:01.937 Predictable Latency Mode: Not Supported 00:15:01.937 Traffic Based Keep ALive: Not Supported 00:15:01.937 Namespace Granularity: Not Supported 00:15:01.937 SQ Associations: Not Supported 00:15:01.937 UUID List: Not Supported 00:15:01.937 Multi-Domain Subsystem: Not Supported 00:15:01.937 Fixed Capacity Management: Not Supported 00:15:01.937 Variable Capacity Management: Not Supported 00:15:01.937 Delete Endurance Group: Not Supported 00:15:01.937 Delete NVM Set: Not Supported 00:15:01.937 Extended LBA Formats Supported: Not Supported 00:15:01.937 Flexible Data Placement Supported: Not Supported 00:15:01.937 00:15:01.937 Controller Memory Buffer Support 00:15:01.937 ================================ 00:15:01.937 Supported: No 00:15:01.937 00:15:01.937 Persistent Memory Region Support 00:15:01.937 ================================ 00:15:01.937 Supported: No 00:15:01.937 00:15:01.937 Admin Command Set Attributes 00:15:01.937 ============================ 00:15:01.937 Security Send/Receive: Not Supported 00:15:01.937 Format NVM: Not Supported 00:15:01.937 Firmware Activate/Download: Not Supported 00:15:01.937 Namespace Management: Not Supported 00:15:01.937 Device Self-Test: Not Supported 00:15:01.937 Directives: Not Supported 00:15:01.937 NVMe-MI: Not Supported 00:15:01.937 Virtualization Management: Not Supported 00:15:01.937 Doorbell Buffer Config: Not Supported 00:15:01.937 Get LBA Status Capability: Not Supported 00:15:01.937 Command & Feature Lockdown Capability: Not Supported 00:15:01.937 Abort Command Limit: 1 00:15:01.937 Async 
Event Request Limit: 4 00:15:01.937 Number of Firmware Slots: N/A 00:15:01.937 Firmware Slot 1 Read-Only: N/A 00:15:01.937 Firmware Activation Without Reset: N/A 00:15:01.937 Multiple Update Detection Support: N/A 00:15:01.937 Firmware Update Granularity: No Information Provided 00:15:01.937 Per-Namespace SMART Log: No 00:15:01.937 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.937 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:01.937 Command Effects Log Page: Not Supported 00:15:01.937 Get Log Page Extended Data: Supported 00:15:01.937 Telemetry Log Pages: Not Supported 00:15:01.937 Persistent Event Log Pages: Not Supported 00:15:01.937 Supported Log Pages Log Page: May Support 00:15:01.937 Commands Supported & Effects Log Page: Not Supported 00:15:01.937 Feature Identifiers & Effects Log Page:May Support 00:15:01.937 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.937 Data Area 4 for Telemetry Log: Not Supported 00:15:01.937 Error Log Page Entries Supported: 128 00:15:01.937 Keep Alive: Not Supported 00:15:01.937 00:15:01.937 NVM Command Set Attributes 00:15:01.937 ========================== 00:15:01.937 Submission Queue Entry Size 00:15:01.937 Max: 1 00:15:01.937 Min: 1 00:15:01.937 Completion Queue Entry Size 00:15:01.937 Max: 1 00:15:01.937 Min: 1 00:15:01.937 Number of Namespaces: 0 00:15:01.937 Compare Command: Not Supported 00:15:01.937 Write Uncorrectable Command: Not Supported 00:15:01.937 Dataset Management Command: Not Supported 00:15:01.937 Write Zeroes Command: Not Supported 00:15:01.937 Set Features Save Field: Not Supported 00:15:01.937 Reservations: Not Supported 00:15:01.937 Timestamp: Not Supported 00:15:01.937 Copy: Not Supported 00:15:01.937 Volatile Write Cache: Not Present 00:15:01.937 Atomic Write Unit (Normal): 1 00:15:01.937 Atomic Write Unit (PFail): 1 00:15:01.937 Atomic Compare & Write Unit: 1 00:15:01.937 Fused Compare & Write: Supported 00:15:01.937 Scatter-Gather List 00:15:01.937 SGL Command Set: Supported 00:15:01.937 SGL Keyed: Supported 00:15:01.937 SGL Bit Bucket Descriptor: Not Supported 00:15:01.937 SGL Metadata Pointer: Not Supported 00:15:01.937 Oversized SGL: Not Supported 00:15:01.938 SGL Metadata Address: Not Supported 00:15:01.938 SGL Offset: Supported 00:15:01.938 Transport SGL Data Block: Not Supported 00:15:01.938 Replay Protected Memory Block: Not Supported 00:15:01.938 00:15:01.938 Firmware Slot Information 00:15:01.938 ========================= 00:15:01.938 Active slot: 0 00:15:01.938 00:15:01.938 00:15:01.938 Error Log 00:15:01.938 ========= 00:15:01.938 00:15:01.938 Active Namespaces 00:15:01.938 ================= 00:15:01.938 Discovery Log Page 00:15:01.938 ================== 00:15:01.938 Generation Counter: 2 00:15:01.938 Number of Records: 2 00:15:01.938 Record Format: 0 00:15:01.938 00:15:01.938 Discovery Log Entry 0 00:15:01.938 ---------------------- 00:15:01.938 Transport Type: 3 (TCP) 00:15:01.938 Address Family: 1 (IPv4) 00:15:01.938 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:01.938 Entry Flags: 00:15:01.938 Duplicate Returned Information: 1 00:15:01.938 Explicit Persistent Connection Support for Discovery: 1 00:15:01.938 Transport Requirements: 00:15:01.938 Secure Channel: Not Required 00:15:01.938 Port ID: 0 (0x0000) 00:15:01.938 Controller ID: 65535 (0xffff) 00:15:01.938 Admin Max SQ Size: 128 00:15:01.938 Transport Service Identifier: 4420 00:15:01.938 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:01.938 Transport Address: 10.0.0.3 00:15:01.938 
Discovery Log Entry 1 00:15:01.938 ---------------------- 00:15:01.938 Transport Type: 3 (TCP) 00:15:01.938 Address Family: 1 (IPv4) 00:15:01.938 Subsystem Type: 2 (NVM Subsystem) 00:15:01.938 Entry Flags: 00:15:01.938 Duplicate Returned Information: 0 00:15:01.938 Explicit Persistent Connection Support for Discovery: 0 00:15:01.938 Transport Requirements: 00:15:01.938 Secure Channel: Not Required 00:15:01.938 Port ID: 0 (0x0000) 00:15:01.938 Controller ID: 65535 (0xffff) 00:15:01.938 Admin Max SQ Size: 128 00:15:01.938 Transport Service Identifier: 4420 00:15:01.938 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:01.938 Transport Address: 10.0.0.3 [2024-12-10 14:20:26.662755] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:01.938 [2024-12-10 14:20:26.662769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35740) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.662776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.938 [2024-12-10 14:20:26.662782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d358c0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.662786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.938 [2024-12-10 14:20:26.662792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35a40) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.662796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.938 [2024-12-10 14:20:26.662801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.662806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.938 [2024-12-10 14:20:26.662816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.662820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.662824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 14:20:26.662832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.662855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.662896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.662903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.662907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.662911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.662919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.662924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.662928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 
14:20:26.662935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.662957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.663062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.663071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.663074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.663089] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:01.938 [2024-12-10 14:20:26.663095] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:01.938 [2024-12-10 14:20:26.663106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 14:20:26.663123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.663144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.663190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.663197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.663201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.663217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 14:20:26.663234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.663252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.663297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.663305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.663309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.663324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663333] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 14:20:26.663340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.663373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.663419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.663426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.663430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.663445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 14:20:26.663461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.663478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.663519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.663526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.663530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.663544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.938 [2024-12-10 14:20:26.663560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.938 [2024-12-10 14:20:26.663577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.938 [2024-12-10 14:20:26.663618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.938 [2024-12-10 14:20:26.663625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.938 [2024-12-10 14:20:26.663629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.938 [2024-12-10 14:20:26.663643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.938 [2024-12-10 14:20:26.663652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.939 [2024-12-10 14:20:26.663659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.939 [2024-12-10 14:20:26.663677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.939 [2024-12-10 14:20:26.663723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.939 [2024-12-10 14:20:26.663730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.939 [2024-12-10 14:20:26.663734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.939 [2024-12-10 14:20:26.663749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.939 [2024-12-10 14:20:26.663766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.939 [2024-12-10 14:20:26.663783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.939 [2024-12-10 14:20:26.663828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.939 [2024-12-10 14:20:26.663835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.939 [2024-12-10 14:20:26.663839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.939 [2024-12-10 14:20:26.663853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.939 [2024-12-10 14:20:26.663869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.939 [2024-12-10 14:20:26.663886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.939 [2024-12-10 14:20:26.663927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.939 [2024-12-10 14:20:26.663934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.939 [2024-12-10 14:20:26.663938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.939 [2024-12-10 14:20:26.663953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.663978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cd1750) 00:15:01.939 [2024-12-10 14:20:26.668052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.939 [2024-12-10 14:20:26.668086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d35bc0, cid 3, qid 0 00:15:01.939 
[2024-12-10 14:20:26.668141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.939 [2024-12-10 14:20:26.668165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.939 [2024-12-10 14:20:26.668169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.939 [2024-12-10 14:20:26.668174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d35bc0) on tqpair=0x1cd1750 00:15:01.939 [2024-12-10 14:20:26.668184] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:15:01.939 00:15:01.939 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:01.939 [2024-12-10 14:20:26.714222] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:01.939 [2024-12-10 14:20:26.714427] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74854 ] 00:15:02.202 [2024-12-10 14:20:26.870786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:02.202 [2024-12-10 14:20:26.870899] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:02.202 [2024-12-10 14:20:26.870907] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:02.202 [2024-12-10 14:20:26.870921] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:02.202 [2024-12-10 14:20:26.870933] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:02.202 [2024-12-10 14:20:26.871318] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:02.202 [2024-12-10 14:20:26.871387] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b9c750 0 00:15:02.202 [2024-12-10 14:20:26.876058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:02.202 [2024-12-10 14:20:26.876086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:02.202 [2024-12-10 14:20:26.876108] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:02.202 [2024-12-10 14:20:26.876112] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:02.202 [2024-12-10 14:20:26.876147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.876158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.876163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.202 [2024-12-10 14:20:26.876178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:02.202 [2024-12-10 14:20:26.876212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.202 [2024-12-10 14:20:26.884071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.202 [2024-12-10 14:20:26.884103] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.202 [2024-12-10 14:20:26.884125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.202 [2024-12-10 14:20:26.884147] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:02.202 [2024-12-10 14:20:26.884158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:02.202 [2024-12-10 14:20:26.884166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:02.202 [2024-12-10 14:20:26.884195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.202 [2024-12-10 14:20:26.884217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.202 [2024-12-10 14:20:26.884250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.202 [2024-12-10 14:20:26.884319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.202 [2024-12-10 14:20:26.884326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.202 [2024-12-10 14:20:26.884329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.202 [2024-12-10 14:20:26.884339] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:02.202 [2024-12-10 14:20:26.884347] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:02.202 [2024-12-10 14:20:26.884355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.202 [2024-12-10 14:20:26.884370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.202 [2024-12-10 14:20:26.884388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.202 [2024-12-10 14:20:26.884462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.202 [2024-12-10 14:20:26.884469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.202 [2024-12-10 14:20:26.884473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.202 [2024-12-10 14:20:26.884484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:02.202 [2024-12-10 14:20:26.884493] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:02.202 [2024-12-10 14:20:26.884502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.202 [2024-12-10 14:20:26.884517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.202 [2024-12-10 14:20:26.884536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.202 [2024-12-10 14:20:26.884577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.202 [2024-12-10 14:20:26.884585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.202 [2024-12-10 14:20:26.884589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.202 [2024-12-10 14:20:26.884599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:02.202 [2024-12-10 14:20:26.884610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.202 [2024-12-10 14:20:26.884626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.202 [2024-12-10 14:20:26.884644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.202 [2024-12-10 14:20:26.884692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.202 [2024-12-10 14:20:26.884699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.202 [2024-12-10 14:20:26.884702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.202 [2024-12-10 14:20:26.884706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.202 [2024-12-10 14:20:26.884712] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:02.203 [2024-12-10 14:20:26.884717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:02.203 [2024-12-10 14:20:26.884726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:02.203 [2024-12-10 14:20:26.884838] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:02.203 [2024-12-10 14:20:26.884844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:02.203 [2024-12-10 14:20:26.884854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.884858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.884862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.884870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.203 [2024-12-10 14:20:26.884889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.203 [2024-12-10 14:20:26.884934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.203 [2024-12-10 14:20:26.884942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.203 [2024-12-10 14:20:26.884945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.884950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.203 [2024-12-10 14:20:26.884955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:02.203 [2024-12-10 14:20:26.884966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.884971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.884975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.884982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.203 [2024-12-10 14:20:26.885029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.203 [2024-12-10 14:20:26.885065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.203 [2024-12-10 14:20:26.885072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.203 [2024-12-10 14:20:26.885076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.203 [2024-12-10 14:20:26.885085] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:02.203 [2024-12-10 14:20:26.885091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:02.203 [2024-12-10 14:20:26.885112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.203 [2024-12-10 14:20:26.885159] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.203 [2024-12-10 14:20:26.885268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.203 [2024-12-10 14:20:26.885276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.203 [2024-12-10 14:20:26.885280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=4096, cccid=0 00:15:02.203 [2024-12-10 14:20:26.885290] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00740) on tqpair(0x1b9c750): expected_datao=0, payload_size=4096 00:15:02.203 [2024-12-10 14:20:26.885295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885304] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885309] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.203 [2024-12-10 14:20:26.885324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.203 [2024-12-10 14:20:26.885328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.203 [2024-12-10 14:20:26.885343] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:02.203 [2024-12-10 14:20:26.885348] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:02.203 [2024-12-10 14:20:26.885353] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:02.203 [2024-12-10 14:20:26.885358] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:02.203 [2024-12-10 14:20:26.885363] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:02.203 [2024-12-10 14:20:26.885369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:02.203 [2024-12-10 14:20:26.885423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.203 [2024-12-10 14:20:26.885471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.203 [2024-12-10 14:20:26.885479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.203 [2024-12-10 
14:20:26.885482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.203 [2024-12-10 14:20:26.885500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.203 [2024-12-10 14:20:26.885522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.203 [2024-12-10 14:20:26.885542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.203 [2024-12-10 14:20:26.885561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.203 [2024-12-10 14:20:26.885580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.203 [2024-12-10 14:20:26.885630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00740, cid 0, qid 0 00:15:02.203 [2024-12-10 14:20:26.885637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c008c0, cid 1, qid 0 00:15:02.203 [2024-12-10 14:20:26.885642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00a40, cid 2, qid 0 00:15:02.203 
[2024-12-10 14:20:26.885647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.203 [2024-12-10 14:20:26.885652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.203 [2024-12-10 14:20:26.885732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.203 [2024-12-10 14:20:26.885739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.203 [2024-12-10 14:20:26.885743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.203 [2024-12-10 14:20:26.885753] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:02.203 [2024-12-10 14:20:26.885762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:02.203 [2024-12-10 14:20:26.885786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.203 [2024-12-10 14:20:26.885794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.203 [2024-12-10 14:20:26.885802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:02.203 [2024-12-10 14:20:26.885820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.204 [2024-12-10 14:20:26.885871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.885878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.885882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.885886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.885980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.885995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.886016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.204 [2024-12-10 14:20:26.886038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.204 
[2024-12-10 14:20:26.886110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.204 [2024-12-10 14:20:26.886118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.204 [2024-12-10 14:20:26.886122] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886126] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=4096, cccid=4 00:15:02.204 [2024-12-10 14:20:26.886131] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00d40) on tqpair(0x1b9c750): expected_datao=0, payload_size=4096 00:15:02.204 [2024-12-10 14:20:26.886136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886143] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886148] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.886163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.886167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.886190] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:02.204 [2024-12-10 14:20:26.886207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.886240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.204 [2024-12-10 14:20:26.886261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.204 [2024-12-10 14:20:26.886338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.204 [2024-12-10 14:20:26.886345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.204 [2024-12-10 14:20:26.886349] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886353] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=4096, cccid=4 00:15:02.204 [2024-12-10 14:20:26.886358] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00d40) on tqpair(0x1b9c750): expected_datao=0, payload_size=4096 00:15:02.204 [2024-12-10 14:20:26.886363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886370] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886374] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.886403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.886407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.886427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.886459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.204 [2024-12-10 14:20:26.886479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.204 [2024-12-10 14:20:26.886541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.204 [2024-12-10 14:20:26.886548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.204 [2024-12-10 14:20:26.886552] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886555] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=4096, cccid=4 00:15:02.204 [2024-12-10 14:20:26.886560] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00d40) on tqpair(0x1b9c750): expected_datao=0, payload_size=4096 00:15:02.204 [2024-12-10 14:20:26.886565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886572] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886576] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.886591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.886594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.886608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:02.204 [2024-12-10 
14:20:26.886640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886651] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:02.204 [2024-12-10 14:20:26.886656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:02.204 [2024-12-10 14:20:26.886662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:02.204 [2024-12-10 14:20:26.886682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.886694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.204 [2024-12-10 14:20:26.886702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.886716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.204 [2024-12-10 14:20:26.886741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.204 [2024-12-10 14:20:26.886749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00ec0, cid 5, qid 0 00:15:02.204 [2024-12-10 14:20:26.886811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.886818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.886821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.886833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.886839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.886842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00ec0) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.886857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.886868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.204 [2024-12-10 14:20:26.886885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00ec0, cid 5, qid 0 
00:15:02.204 [2024-12-10 14:20:26.886931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.886938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.886942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00ec0) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.886956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.886961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9c750) 00:15:02.204 [2024-12-10 14:20:26.887008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.204 [2024-12-10 14:20:26.887030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00ec0, cid 5, qid 0 00:15:02.204 [2024-12-10 14:20:26.887079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.204 [2024-12-10 14:20:26.887086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.204 [2024-12-10 14:20:26.887090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.204 [2024-12-10 14:20:26.887095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00ec0) on tqpair=0x1b9c750 00:15:02.204 [2024-12-10 14:20:26.887106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9c750) 00:15:02.205 [2024-12-10 14:20:26.887119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.205 [2024-12-10 14:20:26.887137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00ec0, cid 5, qid 0 00:15:02.205 [2024-12-10 14:20:26.887189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.205 [2024-12-10 14:20:26.887196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.205 [2024-12-10 14:20:26.887200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00ec0) on tqpair=0x1b9c750 00:15:02.205 [2024-12-10 14:20:26.887225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9c750) 00:15:02.205 [2024-12-10 14:20:26.887239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.205 [2024-12-10 14:20:26.887247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9c750) 00:15:02.205 [2024-12-10 14:20:26.887259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.205 [2024-12-10 14:20:26.887267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:02.205 [2024-12-10 14:20:26.887271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b9c750) 00:15:02.205 [2024-12-10 14:20:26.887278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.205 [2024-12-10 14:20:26.887305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b9c750) 00:15:02.205 [2024-12-10 14:20:26.887317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.205 [2024-12-10 14:20:26.887337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00ec0, cid 5, qid 0 00:15:02.205 [2024-12-10 14:20:26.887344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00d40, cid 4, qid 0 00:15:02.205 [2024-12-10 14:20:26.887349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01040, cid 6, qid 0 00:15:02.205 [2024-12-10 14:20:26.887355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c011c0, cid 7, qid 0 00:15:02.205 [2024-12-10 14:20:26.887502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.205 [2024-12-10 14:20:26.887509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.205 [2024-12-10 14:20:26.887513] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887517] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=8192, cccid=5 00:15:02.205 [2024-12-10 14:20:26.887521] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00ec0) on tqpair(0x1b9c750): expected_datao=0, payload_size=8192 00:15:02.205 [2024-12-10 14:20:26.887526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887543] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887548] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.205 [2024-12-10 14:20:26.887560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.205 [2024-12-10 14:20:26.887563] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887567] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=512, cccid=4 00:15:02.205 [2024-12-10 14:20:26.887572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c00d40) on tqpair(0x1b9c750): expected_datao=0, payload_size=512 00:15:02.205 [2024-12-10 14:20:26.887577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887583] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887587] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.205 [2024-12-10 14:20:26.887599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.205 [2024-12-10 14:20:26.887619] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887623] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=512, cccid=6 00:15:02.205 [2024-12-10 14:20:26.887627] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01040) on tqpair(0x1b9c750): expected_datao=0, payload_size=512 00:15:02.205 [2024-12-10 14:20:26.887632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887639] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887642] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.205 [2024-12-10 14:20:26.887654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.205 [2024-12-10 14:20:26.887658] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887662] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9c750): datao=0, datal=4096, cccid=7 00:15:02.205 [2024-12-10 14:20:26.887667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c011c0) on tqpair(0x1b9c750): expected_datao=0, payload_size=4096 00:15:02.205 [2024-12-10 14:20:26.887671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887678] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887682] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.205 [2024-12-10 14:20:26.887697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.205 [2024-12-10 14:20:26.887700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.205 [2024-12-10 14:20:26.887705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00ec0) on tqpair=0x1b9c750 00:15:02.205 [2024-12-10 14:20:26.887721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.205 ===================================================== 00:15:02.205 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.205 ===================================================== 00:15:02.205 Controller Capabilities/Features 00:15:02.205 ================================ 00:15:02.205 Vendor ID: 8086 00:15:02.205 Subsystem Vendor ID: 8086 00:15:02.205 Serial Number: SPDK00000000000001 00:15:02.205 Model Number: SPDK bdev Controller 00:15:02.205 Firmware Version: 25.01 00:15:02.205 Recommended Arb Burst: 6 00:15:02.205 IEEE OUI Identifier: e4 d2 5c 00:15:02.205 Multi-path I/O 00:15:02.205 May have multiple subsystem ports: Yes 00:15:02.205 May have multiple controllers: Yes 00:15:02.205 Associated with SR-IOV VF: No 00:15:02.205 Max Data Transfer Size: 131072 00:15:02.205 Max Number of Namespaces: 32 00:15:02.205 Max Number of I/O Queues: 127 00:15:02.205 NVMe Specification Version (VS): 1.3 00:15:02.205 NVMe Specification Version (Identify): 1.3 00:15:02.205 Maximum Queue Entries: 128 00:15:02.205 Contiguous Queues Required: Yes 00:15:02.205 Arbitration Mechanisms Supported 00:15:02.205 Weighted Round Robin: Not Supported 00:15:02.205 Vendor Specific: Not Supported 00:15:02.205 Reset Timeout: 15000 ms 00:15:02.205 Doorbell Stride: 4 bytes 00:15:02.205 NVM 
Subsystem Reset: Not Supported 00:15:02.205 Command Sets Supported 00:15:02.205 NVM Command Set: Supported 00:15:02.205 Boot Partition: Not Supported 00:15:02.205 Memory Page Size Minimum: 4096 bytes 00:15:02.205 Memory Page Size Maximum: 4096 bytes 00:15:02.205 Persistent Memory Region: Not Supported 00:15:02.205 Optional Asynchronous Events Supported 00:15:02.205 Namespace Attribute Notices: Supported 00:15:02.205 Firmware Activation Notices: Not Supported 00:15:02.205 ANA Change Notices: Not Supported 00:15:02.205 PLE Aggregate Log Change Notices: Not Supported 00:15:02.205 LBA Status Info Alert Notices: Not Supported 00:15:02.205 EGE Aggregate Log Change Notices: Not Supported 00:15:02.205 Normal NVM Subsystem Shutdown event: Not Supported 00:15:02.205 Zone Descriptor Change Notices: Not Supported 00:15:02.205 Discovery Log Change Notices: Not Supported 00:15:02.205 Controller Attributes 00:15:02.205 128-bit Host Identifier: Supported 00:15:02.205 Non-Operational Permissive Mode: Not Supported 00:15:02.205 NVM Sets: Not Supported 00:15:02.205 Read Recovery Levels: Not Supported 00:15:02.205 Endurance Groups: Not Supported 00:15:02.205 Predictable Latency Mode: Not Supported 00:15:02.205 Traffic Based Keep ALive: Not Supported 00:15:02.205 Namespace Granularity: Not Supported 00:15:02.205 SQ Associations: Not Supported 00:15:02.205 UUID List: Not Supported 00:15:02.205 Multi-Domain Subsystem: Not Supported 00:15:02.205 Fixed Capacity Management: Not Supported 00:15:02.205 Variable Capacity Management: Not Supported 00:15:02.205 Delete Endurance Group: Not Supported 00:15:02.205 Delete NVM Set: Not Supported 00:15:02.205 Extended LBA Formats Supported: Not Supported 00:15:02.205 Flexible Data Placement Supported: Not Supported 00:15:02.205 00:15:02.205 Controller Memory Buffer Support 00:15:02.205 ================================ 00:15:02.205 Supported: No 00:15:02.205 00:15:02.205 Persistent Memory Region Support 00:15:02.205 ================================ 00:15:02.205 Supported: No 00:15:02.205 00:15:02.205 Admin Command Set Attributes 00:15:02.205 ============================ 00:15:02.205 Security Send/Receive: Not Supported 00:15:02.205 Format NVM: Not Supported 00:15:02.205 Firmware Activate/Download: Not Supported 00:15:02.205 Namespace Management: Not Supported 00:15:02.205 Device Self-Test: Not Supported 00:15:02.205 Directives: Not Supported 00:15:02.205 NVMe-MI: Not Supported 00:15:02.205 Virtualization Management: Not Supported 00:15:02.205 Doorbell Buffer Config: Not Supported 00:15:02.205 Get LBA Status Capability: Not Supported 00:15:02.205 Command & Feature Lockdown Capability: Not Supported 00:15:02.205 Abort Command Limit: 4 00:15:02.205 Async Event Request Limit: 4 00:15:02.206 Number of Firmware Slots: N/A 00:15:02.206 Firmware Slot 1 Read-Only: N/A 00:15:02.206 Firmware Activation Without Reset: [2024-12-10 14:20:26.887728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.206 [2024-12-10 14:20:26.887732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.206 [2024-12-10 14:20:26.887736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00d40) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 14:20:26.887748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.206 [2024-12-10 14:20:26.887755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.206 [2024-12-10 14:20:26.887758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.206 
[2024-12-10 14:20:26.887762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01040) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 14:20:26.887770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.206 [2024-12-10 14:20:26.887776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.206 [2024-12-10 14:20:26.887780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.206 [2024-12-10 14:20:26.887784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c011c0) on tqpair=0x1b9c750 00:15:02.206 N/A 00:15:02.206 Multiple Update Detection Support: N/A 00:15:02.206 Firmware Update Granularity: No Information Provided 00:15:02.206 Per-Namespace SMART Log: No 00:15:02.206 Asymmetric Namespace Access Log Page: Not Supported 00:15:02.206 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:02.206 Command Effects Log Page: Supported 00:15:02.206 Get Log Page Extended Data: Supported 00:15:02.206 Telemetry Log Pages: Not Supported 00:15:02.206 Persistent Event Log Pages: Not Supported 00:15:02.206 Supported Log Pages Log Page: May Support 00:15:02.206 Commands Supported & Effects Log Page: Not Supported 00:15:02.206 Feature Identifiers & Effects Log Page:May Support 00:15:02.206 NVMe-MI Commands & Effects Log Page: May Support 00:15:02.206 Data Area 4 for Telemetry Log: Not Supported 00:15:02.206 Error Log Page Entries Supported: 128 00:15:02.206 Keep Alive: Supported 00:15:02.206 Keep Alive Granularity: 10000 ms 00:15:02.206 00:15:02.206 NVM Command Set Attributes 00:15:02.206 ========================== 00:15:02.206 Submission Queue Entry Size 00:15:02.206 Max: 64 00:15:02.206 Min: 64 00:15:02.206 Completion Queue Entry Size 00:15:02.206 Max: 16 00:15:02.206 Min: 16 00:15:02.206 Number of Namespaces: 32 00:15:02.206 Compare Command: Supported 00:15:02.206 Write Uncorrectable Command: Not Supported 00:15:02.206 Dataset Management Command: Supported 00:15:02.206 Write Zeroes Command: Supported 00:15:02.206 Set Features Save Field: Not Supported 00:15:02.206 Reservations: Supported 00:15:02.206 Timestamp: Not Supported 00:15:02.206 Copy: Supported 00:15:02.206 Volatile Write Cache: Present 00:15:02.206 Atomic Write Unit (Normal): 1 00:15:02.206 Atomic Write Unit (PFail): 1 00:15:02.206 Atomic Compare & Write Unit: 1 00:15:02.206 Fused Compare & Write: Supported 00:15:02.206 Scatter-Gather List 00:15:02.206 SGL Command Set: Supported 00:15:02.206 SGL Keyed: Supported 00:15:02.206 SGL Bit Bucket Descriptor: Not Supported 00:15:02.206 SGL Metadata Pointer: Not Supported 00:15:02.206 Oversized SGL: Not Supported 00:15:02.206 SGL Metadata Address: Not Supported 00:15:02.206 SGL Offset: Supported 00:15:02.206 Transport SGL Data Block: Not Supported 00:15:02.206 Replay Protected Memory Block: Not Supported 00:15:02.206 00:15:02.206 Firmware Slot Information 00:15:02.206 ========================= 00:15:02.206 Active slot: 1 00:15:02.206 Slot 1 Firmware Revision: 25.01 00:15:02.206 00:15:02.206 00:15:02.206 Commands Supported and Effects 00:15:02.206 ============================== 00:15:02.206 Admin Commands 00:15:02.206 -------------- 00:15:02.206 Get Log Page (02h): Supported 00:15:02.206 Identify (06h): Supported 00:15:02.206 Abort (08h): Supported 00:15:02.206 Set Features (09h): Supported 00:15:02.206 Get Features (0Ah): Supported 00:15:02.206 Asynchronous Event Request (0Ch): Supported 00:15:02.206 Keep Alive (18h): Supported 00:15:02.206 I/O Commands 00:15:02.206 ------------ 00:15:02.206 Flush (00h): Supported 
LBA-Change 00:15:02.206 Write (01h): Supported LBA-Change 00:15:02.206 Read (02h): Supported 00:15:02.206 Compare (05h): Supported 00:15:02.206 Write Zeroes (08h): Supported LBA-Change 00:15:02.206 Dataset Management (09h): Supported LBA-Change 00:15:02.206 Copy (19h): Supported LBA-Change 00:15:02.206 00:15:02.206 Error Log 00:15:02.206 ========= 00:15:02.206 00:15:02.206 Arbitration 00:15:02.206 =========== 00:15:02.206 Arbitration Burst: 1 00:15:02.206 00:15:02.206 Power Management 00:15:02.206 ================ 00:15:02.206 Number of Power States: 1 00:15:02.206 Current Power State: Power State #0 00:15:02.206 Power State #0: 00:15:02.206 Max Power: 0.00 W 00:15:02.206 Non-Operational State: Operational 00:15:02.206 Entry Latency: Not Reported 00:15:02.206 Exit Latency: Not Reported 00:15:02.206 Relative Read Throughput: 0 00:15:02.206 Relative Read Latency: 0 00:15:02.206 Relative Write Throughput: 0 00:15:02.206 Relative Write Latency: 0 00:15:02.206 Idle Power: Not Reported 00:15:02.206 Active Power: Not Reported 00:15:02.206 Non-Operational Permissive Mode: Not Supported 00:15:02.206 00:15:02.206 Health Information 00:15:02.206 ================== 00:15:02.206 Critical Warnings: 00:15:02.206 Available Spare Space: OK 00:15:02.206 Temperature: OK 00:15:02.206 Device Reliability: OK 00:15:02.206 Read Only: No 00:15:02.206 Volatile Memory Backup: OK 00:15:02.206 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:02.206 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:02.206 Available Spare: 0% 00:15:02.206 Available Spare Threshold: 0% 00:15:02.206 Life Percentage Used:[2024-12-10 14:20:26.887892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.206 [2024-12-10 14:20:26.887900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b9c750) 00:15:02.206 [2024-12-10 14:20:26.887909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.206 [2024-12-10 14:20:26.887933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c011c0, cid 7, qid 0 00:15:02.206 [2024-12-10 14:20:26.887989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.206 [2024-12-10 14:20:26.888746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.206 [2024-12-10 14:20:26.888772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.206 [2024-12-10 14:20:26.888794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c011c0) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 14:20:26.888867] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:02.206 [2024-12-10 14:20:26.888888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00740) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 14:20:26.888897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.206 [2024-12-10 14:20:26.888903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c008c0) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 14:20:26.888909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.206 [2024-12-10 14:20:26.888914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00a40) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 
14:20:26.888919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.206 [2024-12-10 14:20:26.888924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.206 [2024-12-10 14:20:26.888929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.206 [2024-12-10 14:20:26.888941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.206 [2024-12-10 14:20:26.888946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.888950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.888991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889262] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:02.207 [2024-12-10 14:20:26.889267] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:02.207 [2024-12-10 14:20:26.889279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 
14:20:26.889314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:15:02.207 [2024-12-10 14:20:26.889696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.889897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.889904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.889908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.889923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.889932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.889939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.889973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.890034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.890043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.890048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.890064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.890081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.890101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.890146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.890153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.890158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.890173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.890190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.890208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.890249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.890257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.890261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.890277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.207 [2024-12-10 14:20:26.890294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.207 [2024-12-10 14:20:26.890311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.207 [2024-12-10 14:20:26.890357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.207 [2024-12-10 14:20:26.890364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.207 [2024-12-10 14:20:26.890369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.207 [2024-12-10 14:20:26.890373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.207 [2024-12-10 14:20:26.890384] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.890401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.890418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.890462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.890475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.890480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.890496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.890514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.890532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.890579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.890586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.890590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.890606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.890623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.890640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.890698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.890722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.890726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.890742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890751] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.890759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.890776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.890823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.890831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.890835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.890850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.890867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.890884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.890930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.890938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.890942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.890957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.890990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.890998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.891065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.891110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.891171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.891214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.891278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.891336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.891394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.891452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 
14:20:26.891515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.891556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.891630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.208 [2024-12-10 14:20:26.891672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.208 [2024-12-10 14:20:26.891689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.208 [2024-12-10 14:20:26.891731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.208 [2024-12-10 14:20:26.891738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.208 [2024-12-10 14:20:26.891742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.208 [2024-12-10 14:20:26.891757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.208 [2024-12-10 14:20:26.891765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.209 [2024-12-10 14:20:26.891773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.209 [2024-12-10 14:20:26.891790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.209 [2024-12-10 14:20:26.891832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.209 [2024-12-10 14:20:26.891839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.209 
[2024-12-10 14:20:26.891843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.891847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.209 [2024-12-10 14:20:26.891858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.891864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.891867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.209 [2024-12-10 14:20:26.891875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.209 [2024-12-10 14:20:26.891892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.209 [2024-12-10 14:20:26.891937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.209 [2024-12-10 14:20:26.891944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.209 [2024-12-10 14:20:26.891949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.891953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.209 [2024-12-10 14:20:26.891963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.891985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.891989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.209 [2024-12-10 14:20:26.891996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.209 [2024-12-10 14:20:26.898038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.209 [2024-12-10 14:20:26.898085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.209 [2024-12-10 14:20:26.898094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.209 [2024-12-10 14:20:26.898098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.898117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.209 [2024-12-10 14:20:26.898133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.898139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.898144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9c750) 00:15:02.209 [2024-12-10 14:20:26.898153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.209 [2024-12-10 14:20:26.898180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c00bc0, cid 3, qid 0 00:15:02.209 [2024-12-10 14:20:26.898244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.209 [2024-12-10 14:20:26.898252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.209 [2024-12-10 14:20:26.898256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.209 [2024-12-10 14:20:26.898260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c00bc0) on tqpair=0x1b9c750 00:15:02.209 [2024-12-10 14:20:26.898270] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds 00:15:02.209 0% 00:15:02.209 Data Units Read: 0 00:15:02.209 Data Units Written: 0 00:15:02.209 Host Read Commands: 0 00:15:02.209 Host Write Commands: 0 00:15:02.209 Controller Busy Time: 0 minutes 00:15:02.209 Power Cycles: 0 00:15:02.209 Power On Hours: 0 hours 00:15:02.209 Unsafe Shutdowns: 0 00:15:02.209 Unrecoverable Media Errors: 0 00:15:02.209 Lifetime Error Log Entries: 0 00:15:02.209 Warning Temperature Time: 0 minutes 00:15:02.209 Critical Temperature Time: 0 minutes 00:15:02.209 00:15:02.209 Number of Queues 00:15:02.209 ================ 00:15:02.209 Number of I/O Submission Queues: 127 00:15:02.209 Number of I/O Completion Queues: 127 00:15:02.209 00:15:02.209 Active Namespaces 00:15:02.209 ================= 00:15:02.209 Namespace ID:1 00:15:02.209 Error Recovery Timeout: Unlimited 00:15:02.209 Command Set Identifier: NVM (00h) 00:15:02.209 Deallocate: Supported 00:15:02.209 Deallocated/Unwritten Error: Not Supported 00:15:02.209 Deallocated Read Value: Unknown 00:15:02.209 Deallocate in Write Zeroes: Not Supported 00:15:02.209 Deallocated Guard Field: 0xFFFF 00:15:02.209 Flush: Supported 00:15:02.209 Reservation: Supported 00:15:02.209 Namespace Sharing Capabilities: Multiple Controllers 00:15:02.209 Size (in LBAs): 131072 (0GiB) 00:15:02.209 Capacity (in LBAs): 131072 (0GiB) 00:15:02.209 Utilization (in LBAs): 131072 (0GiB) 00:15:02.209 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:02.209 EUI64: ABCDEF0123456789 00:15:02.209 UUID: 72e0f0ee-235c-46b7-9b54-e3aca3a5b20c 00:15:02.209 Thin Provisioning: Not Supported 00:15:02.209 Per-NS Atomic Units: Yes 00:15:02.209 Atomic Boundary Size (Normal): 0 00:15:02.209 Atomic Boundary Size (PFail): 0 00:15:02.209 Atomic Boundary Offset: 0 00:15:02.209 Maximum Single Source Range Length: 65535 00:15:02.209 Maximum Copy Length: 65535 00:15:02.209 Maximum Source Range Count: 1 00:15:02.209 NGUID/EUI64 Never Reused: No 00:15:02.209 Namespace Write Protected: No 00:15:02.209 Number of LBA Formats: 1 00:15:02.209 Current LBA Format: LBA Format #00 00:15:02.209 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:02.209 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:02.209 14:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:02.209 14:20:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:02.209 rmmod nvme_tcp 00:15:02.209 rmmod nvme_fabrics 00:15:02.209 rmmod nvme_keyring 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74819 ']' 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74819 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74819 ']' 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74819 00:15:02.209 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74819 00:15:02.469 killing process with pid 74819 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74819' 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74819 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74819 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
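The entries above finish the identify test's teardown of the target side: the test subsystem is deleted over the RPC socket, the kernel NVMe-oF initiator modules are unloaded, the nvmf_tgt process is killed, and the iptables rules the harness tagged are flushed; the veth/bridge links are dismantled in the entries that follow. A condensed, approximate sketch of those steps, with the rpc.py path, NQN, and PID copied from this log (the iptables pipeline form is an assumption inferred from the iptr trace):

# Remove the test subsystem through the running target's RPC socket.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Unload the kernel initiator modules pulled in for the test
# (the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring going away).
modprobe -v -r nvme-tcp

# Stop the nvmf_tgt started for this test; 74819 is the pid reported in this run.
kill 74819 && wait 74819

# Drop only the rules the harness added; they all carry an SPDK_NVMF comment
# (pipeline form assumed from the iptables-save / grep -v SPDK_NVMF / iptables-restore trace).
iptables-save | grep -v SPDK_NVMF | iptables-restore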
00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:02.469 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:02.728 ************************************ 00:15:02.728 END TEST nvmf_identify 00:15:02.728 ************************************ 00:15:02.728 00:15:02.728 real 0m2.167s 00:15:02.728 user 0m4.314s 00:15:02.728 sys 0m0.672s 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:02.728 ************************************ 00:15:02.728 START TEST nvmf_perf 00:15:02.728 ************************************ 00:15:02.728 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:03.021 * Looking for test storage... 
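The entries just above (nvmf_veth_fini followed by remove_spdk_ns) dismantle the virtual topology so the perf test that starts here can rebuild it from scratch. A minimal sketch of that cleanup, using the interface and namespace names from this log; the body of remove_spdk_ns is not traced here, so the final netns deletion is an assumption:

# Detach the bridge ports and bring the host-side links down.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster
    ip link set "$port" down
done

# Delete the bridge and the host-side ends of the veth pairs.
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2

# Delete the target-side ends inside the namespace, then the namespace itself (assumed).
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk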
00:15:03.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:03.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.021 --rc genhtml_branch_coverage=1 00:15:03.021 --rc genhtml_function_coverage=1 00:15:03.021 --rc genhtml_legend=1 00:15:03.021 --rc geninfo_all_blocks=1 00:15:03.021 --rc geninfo_unexecuted_blocks=1 00:15:03.021 00:15:03.021 ' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:03.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.021 --rc genhtml_branch_coverage=1 00:15:03.021 --rc genhtml_function_coverage=1 00:15:03.021 --rc genhtml_legend=1 00:15:03.021 --rc geninfo_all_blocks=1 00:15:03.021 --rc geninfo_unexecuted_blocks=1 00:15:03.021 00:15:03.021 ' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:03.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.021 --rc genhtml_branch_coverage=1 00:15:03.021 --rc genhtml_function_coverage=1 00:15:03.021 --rc genhtml_legend=1 00:15:03.021 --rc geninfo_all_blocks=1 00:15:03.021 --rc geninfo_unexecuted_blocks=1 00:15:03.021 00:15:03.021 ' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:03.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.021 --rc genhtml_branch_coverage=1 00:15:03.021 --rc genhtml_function_coverage=1 00:15:03.021 --rc genhtml_legend=1 00:15:03.021 --rc geninfo_all_blocks=1 00:15:03.021 --rc geninfo_unexecuted_blocks=1 00:15:03.021 00:15:03.021 ' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.021 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:03.022 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:03.022 Cannot find device "nvmf_init_br" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:03.022 Cannot find device "nvmf_init_br2" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:03.022 Cannot find device "nvmf_tgt_br" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.022 Cannot find device "nvmf_tgt_br2" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:03.022 Cannot find device "nvmf_init_br" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:03.022 Cannot find device "nvmf_init_br2" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:03.022 Cannot find device "nvmf_tgt_br" 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:03.022 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:03.281 Cannot find device "nvmf_tgt_br2" 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:03.281 Cannot find device "nvmf_br" 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:03.281 Cannot find device "nvmf_init_if" 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:03.281 Cannot find device "nvmf_init_if2" 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:03.281 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:03.282 14:20:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:03.282 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:03.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:03.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:15:03.541 00:15:03.541 --- 10.0.0.3 ping statistics --- 00:15:03.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.541 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:03.541 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:03.541 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:03.541 00:15:03.541 --- 10.0.0.4 ping statistics --- 00:15:03.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.541 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:03.541 00:15:03.541 --- 10.0.0.1 ping statistics --- 00:15:03.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.541 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:03.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:15:03.541 00:15:03.541 --- 10.0.0.2 ping statistics --- 00:15:03.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.541 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=75069 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 75069 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 75069 ']' 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
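Condensed for reference, the nvmf_veth_init sequence traced above reduces to roughly the following (a sketch, not the helper itself; only the first initiator/target pair is shown, and the interface names, 10.0.0.x addresses and iptables rule are the ones visible in the trace):

# One veth pair for the initiator, one for the target; the target end lives in its own namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator side stays in the root namespace, target side in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring the links up and enslave the bridge-side peers to nvmf_br.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP (port 4420) in on the initiator interface and verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3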
00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.541 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.541 [2024-12-10 14:20:28.234393] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:03.541 [2024-12-10 14:20:28.234498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.800 [2024-12-10 14:20:28.388530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.800 [2024-12-10 14:20:28.431018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.800 [2024-12-10 14:20:28.431070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.800 [2024-12-10 14:20:28.431085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.800 [2024-12-10 14:20:28.431095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.800 [2024-12-10 14:20:28.431104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.800 [2024-12-10 14:20:28.432087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.800 [2024-12-10 14:20:28.435999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.800 [2024-12-10 14:20:28.436154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.800 [2024-12-10 14:20:28.436165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.800 [2024-12-10 14:20:28.472191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:03.800 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:04.368 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:04.368 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:04.626 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:04.626 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:04.885 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:04.885 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:04.885 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:04.885 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:04.885 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:05.144 [2024-12-10 14:20:29.929532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.144 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.402 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:05.402 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.660 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:05.660 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:05.918 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:06.177 [2024-12-10 14:20:30.946878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:06.177 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:06.436 14:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:06.436 14:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:06.436 14:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:06.436 14:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:07.812 Initializing NVMe Controllers 00:15:07.812 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:07.812 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:07.812 Initialization complete. Launching workers. 00:15:07.812 ======================================================== 00:15:07.812 Latency(us) 00:15:07.812 Device Information : IOPS MiB/s Average min max 00:15:07.812 PCIE (0000:00:10.0) NSID 1 from core 0: 21705.56 84.79 1473.71 381.03 7985.58 00:15:07.812 ======================================================== 00:15:07.812 Total : 21705.56 84.79 1473.71 381.03 7985.58 00:15:07.812 00:15:07.812 14:20:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:09.189 Initializing NVMe Controllers 00:15:09.189 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.189 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:09.189 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:09.189 Initialization complete. Launching workers. 
00:15:09.189 ======================================================== 00:15:09.189 Latency(us) 00:15:09.189 Device Information : IOPS MiB/s Average min max 00:15:09.189 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3862.95 15.09 257.42 92.65 4283.52 00:15:09.189 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8055.22 6940.64 11978.62 00:15:09.189 ======================================================== 00:15:09.189 Total : 3987.95 15.58 501.83 92.65 11978.62 00:15:09.189 00:15:09.189 14:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:10.566 Initializing NVMe Controllers 00:15:10.566 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.566 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.566 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:10.566 Initialization complete. Launching workers. 00:15:10.566 ======================================================== 00:15:10.566 Latency(us) 00:15:10.566 Device Information : IOPS MiB/s Average min max 00:15:10.566 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8969.90 35.04 3567.96 567.72 10266.83 00:15:10.566 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.43 15.14 8271.27 4854.28 16904.49 00:15:10.566 ======================================================== 00:15:10.566 Total : 12845.33 50.18 4986.95 567.72 16904.49 00:15:10.566 00:15:10.566 14:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:10.566 14:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:13.098 Initializing NVMe Controllers 00:15:13.098 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.098 Controller IO queue size 128, less than required. 00:15:13.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.098 Controller IO queue size 128, less than required. 00:15:13.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.098 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.098 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:13.098 Initialization complete. Launching workers. 
00:15:13.098 ======================================================== 00:15:13.098 Latency(us) 00:15:13.098 Device Information : IOPS MiB/s Average min max 00:15:13.098 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1826.13 456.53 70671.87 36311.37 125894.35 00:15:13.098 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 643.14 160.78 207290.23 56882.98 368294.42 00:15:13.098 ======================================================== 00:15:13.098 Total : 2469.27 617.32 106255.03 36311.37 368294.42 00:15:13.098 00:15:13.098 14:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:13.357 Initializing NVMe Controllers 00:15:13.357 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.357 Controller IO queue size 128, less than required. 00:15:13.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.357 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:13.357 Controller IO queue size 128, less than required. 00:15:13.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.357 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:13.357 WARNING: Some requested NVMe devices were skipped 00:15:13.357 No valid NVMe controllers or AIO or URING devices found 00:15:13.357 14:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:15.909 Initializing NVMe Controllers 00:15:15.909 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.909 Controller IO queue size 128, less than required. 00:15:15.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.909 Controller IO queue size 128, less than required. 00:15:15.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.909 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:15.909 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:15.909 Initialization complete. Launching workers. 
00:15:15.909 00:15:15.909 ==================== 00:15:15.909 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:15.909 TCP transport: 00:15:15.909 polls: 9804 00:15:15.909 idle_polls: 5634 00:15:15.909 sock_completions: 4170 00:15:15.909 nvme_completions: 7189 00:15:15.909 submitted_requests: 10930 00:15:15.909 queued_requests: 1 00:15:15.909 00:15:15.909 ==================== 00:15:15.909 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:15.909 TCP transport: 00:15:15.909 polls: 12692 00:15:15.909 idle_polls: 8394 00:15:15.909 sock_completions: 4298 00:15:15.909 nvme_completions: 6877 00:15:15.909 submitted_requests: 10284 00:15:15.909 queued_requests: 1 00:15:15.909 ======================================================== 00:15:15.909 Latency(us) 00:15:15.909 Device Information : IOPS MiB/s Average min max 00:15:15.909 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1792.02 448.00 72811.99 37487.83 98148.32 00:15:15.909 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1714.24 428.56 75998.46 25615.65 121272.35 00:15:15.909 ======================================================== 00:15:15.909 Total : 3506.25 876.56 74369.88 25615.65 121272.35 00:15:15.909 00:15:15.909 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:15.909 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.168 rmmod nvme_tcp 00:15:16.168 rmmod nvme_fabrics 00:15:16.168 rmmod nvme_keyring 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 75069 ']' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 75069 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 75069 ']' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 75069 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75069 00:15:16.168 killing process with pid 75069 00:15:16.168 14:20:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75069' 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 75069 00:15:16.168 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 75069 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.736 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:16.995 00:15:16.995 real 0m14.273s 00:15:16.995 user 0m51.616s 00:15:16.995 sys 0m4.046s 00:15:16.995 14:20:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:16.995 ************************************ 00:15:16.995 END TEST nvmf_perf 00:15:16.995 ************************************ 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.995 14:20:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.255 ************************************ 00:15:17.255 START TEST nvmf_fio_host 00:15:17.255 ************************************ 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:17.255 * Looking for test storage... 00:15:17.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.255 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:17.256 14:20:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:17.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.256 --rc genhtml_branch_coverage=1 00:15:17.256 --rc genhtml_function_coverage=1 00:15:17.256 --rc genhtml_legend=1 00:15:17.256 --rc geninfo_all_blocks=1 00:15:17.256 --rc geninfo_unexecuted_blocks=1 00:15:17.256 00:15:17.256 ' 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:17.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.256 --rc genhtml_branch_coverage=1 00:15:17.256 --rc genhtml_function_coverage=1 00:15:17.256 --rc genhtml_legend=1 00:15:17.256 --rc geninfo_all_blocks=1 00:15:17.256 --rc geninfo_unexecuted_blocks=1 00:15:17.256 00:15:17.256 ' 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:17.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.256 --rc genhtml_branch_coverage=1 00:15:17.256 --rc genhtml_function_coverage=1 00:15:17.256 --rc genhtml_legend=1 00:15:17.256 --rc geninfo_all_blocks=1 00:15:17.256 --rc geninfo_unexecuted_blocks=1 00:15:17.256 00:15:17.256 ' 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:17.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.256 --rc genhtml_branch_coverage=1 00:15:17.256 --rc genhtml_function_coverage=1 00:15:17.256 --rc genhtml_legend=1 00:15:17.256 --rc geninfo_all_blocks=1 00:15:17.256 --rc geninfo_unexecuted_blocks=1 00:15:17.256 00:15:17.256 ' 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.256 14:20:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.256 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
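Stripped of the xtrace noise, the fio host test whose environment is being prepared here follows the same scaffold as the perf test above. A rough sketch built only from the helpers and commands visible in this trace (nvmftestinit, waitforlisten, nvmftestfini); capturing the target pid via $! is an assumption about how nvmfpid is obtained, and the fio-specific steps are elided:

#!/usr/bin/env bash
# Illustrative shape of an SPDK nvmf host test as it appears in this log, not the literal script.
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh

nvmftestinit                                  # build the veth/bridge/namespace topology shown earlier

# Launch the target inside the namespace and wait for its RPC socket (/var/tmp/spdk.sock).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
waitforlisten $nvmfpid

# ... per-test RPC provisioning and the fio workload go here ...

trap - SIGINT SIGTERM EXIT
nvmftestfini                                  # kill the target, restore iptables, delete the namespace and links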
00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:17.257 Cannot find device "nvmf_init_br" 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:17.257 Cannot find device "nvmf_init_br2" 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:17.257 Cannot find device "nvmf_tgt_br" 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:17.257 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:17.516 Cannot find device "nvmf_tgt_br2" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:17.516 Cannot find device "nvmf_init_br" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:17.516 Cannot find device "nvmf_init_br2" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:17.516 Cannot find device "nvmf_tgt_br" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:17.516 Cannot find device "nvmf_tgt_br2" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:17.516 Cannot find device "nvmf_br" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:17.516 Cannot find device "nvmf_init_if" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:17.516 Cannot find device "nvmf_init_if2" 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:17.516 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:17.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:17.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:17.776 00:15:17.776 --- 10.0.0.3 ping statistics --- 00:15:17.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.776 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:17.776 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:17.776 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:15:17.776 00:15:17.776 --- 10.0.0.4 ping statistics --- 00:15:17.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.776 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:17.776 00:15:17.776 --- 10.0.0.1 ping statistics --- 00:15:17.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.776 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:17.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:17.776 00:15:17.776 --- 10.0.0.2 ping statistics --- 00:15:17.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.776 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75523 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75523 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 75523 ']' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.776 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.776 [2024-12-10 14:20:42.503142] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:17.776 [2024-12-10 14:20:42.503580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.036 [2024-12-10 14:20:42.645408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.036 [2024-12-10 14:20:42.677483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.036 [2024-12-10 14:20:42.677736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.036 [2024-12-10 14:20:42.677904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.036 [2024-12-10 14:20:42.678165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.036 [2024-12-10 14:20:42.678377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
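At this point the target application is launched inside the network namespace and the harness blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent of that launch-and-wait step, with paths as in the log (the polling loop is a simplification of the harness's waitforlisten helper, not its exact code):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the default RPC socket until the target is ready (or the process dies)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done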
00:15:18.036 [2024-12-10 14:20:42.679468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.036 [2024-12-10 14:20:42.679593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.036 [2024-12-10 14:20:42.680199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.036 [2024-12-10 14:20:42.680208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.036 [2024-12-10 14:20:42.710135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.036 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.036 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:18.036 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:18.294 [2024-12-10 14:20:43.053396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.294 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:18.294 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.294 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.294 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:18.553 Malloc1 00:15:18.553 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.812 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.071 14:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:19.330 [2024-12-10 14:20:44.110480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:19.330 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:19.590 14:20:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:19.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:19.855 fio-3.35 00:15:19.855 Starting 1 thread 00:15:22.392 00:15:22.392 test: (groupid=0, jobs=1): err= 0: pid=75593: Tue Dec 10 14:20:46 2024 00:15:22.392 read: IOPS=9024, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec) 00:15:22.392 slat (nsec): min=1864, max=236133, avg=2439.70, stdev=2510.28 00:15:22.392 clat (usec): min=1793, max=13980, avg=7370.02, stdev=617.75 00:15:22.392 lat (usec): min=1824, max=13983, avg=7372.46, stdev=617.59 00:15:22.392 clat percentiles (usec): 00:15:22.392 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:15:22.392 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:15:22.392 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:15:22.392 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[12256], 99.95th=[13698], 00:15:22.392 | 99.99th=[13960] 00:15:22.392 bw ( KiB/s): min=35296, max=37176, per=99.97%, avg=36088.00, stdev=842.93, samples=4 00:15:22.392 iops : min= 8824, max= 9294, avg=9022.00, stdev=210.73, samples=4 00:15:22.392 write: IOPS=9043, BW=35.3MiB/s (37.0MB/s)(70.9MiB/2007msec); 0 zone resets 00:15:22.392 slat (nsec): min=1919, max=162531, avg=2490.42, stdev=1792.13 00:15:22.392 clat (usec): min=1693, max=13664, avg=6734.72, stdev=554.65 00:15:22.392 lat (usec): min=1703, max=13666, avg=6737.21, stdev=554.60 00:15:22.392 
clat percentiles (usec): 00:15:22.392 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:15:22.392 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:15:22.392 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7570], 00:15:22.392 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[11207], 99.95th=[12125], 00:15:22.392 | 99.99th=[13566] 00:15:22.392 bw ( KiB/s): min=35520, max=37184, per=100.00%, avg=36186.00, stdev=740.95, samples=4 00:15:22.392 iops : min= 8880, max= 9296, avg=9046.50, stdev=185.24, samples=4 00:15:22.392 lat (msec) : 2=0.02%, 4=0.13%, 10=99.66%, 20=0.18% 00:15:22.392 cpu : usr=68.79%, sys=23.43%, ctx=30, majf=0, minf=7 00:15:22.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:22.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.392 issued rwts: total=18113,18151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.392 00:15:22.392 Run status group 0 (all jobs): 00:15:22.392 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2007-2007msec 00:15:22.392 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.9MiB (74.3MB), run=2007-2007msec 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:22.392 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:22.392 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:22.392 fio-3.35 00:15:22.392 Starting 1 thread 00:15:24.926 00:15:24.926 test: (groupid=0, jobs=1): err= 0: pid=75646: Tue Dec 10 14:20:49 2024 00:15:24.926 read: IOPS=8342, BW=130MiB/s (137MB/s)(262MiB/2009msec) 00:15:24.926 slat (usec): min=2, max=155, avg= 3.67, stdev= 2.70 00:15:24.926 clat (usec): min=1922, max=18542, avg=8743.64, stdev=2669.86 00:15:24.926 lat (usec): min=1925, max=18546, avg=8747.31, stdev=2669.90 00:15:24.926 clat percentiles (usec): 00:15:24.926 | 1.00th=[ 3982], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6325], 00:15:24.926 | 30.00th=[ 7111], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9241], 00:15:24.926 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12256], 95.00th=[13435], 00:15:24.926 | 99.00th=[16057], 99.50th=[16450], 99.90th=[17957], 99.95th=[18220], 00:15:24.926 | 99.99th=[18482] 00:15:24.926 bw ( KiB/s): min=63392, max=75072, per=51.38%, avg=68584.00, stdev=5842.51, samples=4 00:15:24.926 iops : min= 3962, max= 4692, avg=4286.50, stdev=365.16, samples=4 00:15:24.926 write: IOPS=4752, BW=74.3MiB/s (77.9MB/s)(139MiB/1877msec); 0 zone resets 00:15:24.926 slat (usec): min=31, max=357, avg=38.06, stdev= 9.95 00:15:24.926 clat (usec): min=2741, max=19567, avg=11747.87, stdev=2255.35 00:15:24.926 lat (usec): min=2774, max=19601, avg=11785.93, stdev=2255.46 00:15:24.926 clat percentiles (usec): 00:15:24.926 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:15:24.926 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:15:24.926 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14746], 95.00th=[15795], 00:15:24.926 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19530], 99.95th=[19530], 00:15:24.926 | 99.99th=[19530] 00:15:24.926 bw ( KiB/s): min=65728, max=77280, per=93.51%, avg=71112.00, stdev=6132.06, samples=4 00:15:24.926 iops : min= 4108, max= 4830, avg=4444.50, stdev=383.25, samples=4 00:15:24.926 lat (msec) : 2=0.01%, 4=0.70%, 10=52.06%, 20=47.22% 00:15:24.926 cpu : usr=84.06%, sys=11.70%, ctx=22, majf=0, minf=4 00:15:24.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:24.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.926 issued rwts: total=16761,8921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.926 00:15:24.926 Run status group 0 (all jobs): 00:15:24.926 
READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2009-2009msec 00:15:24.926 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=139MiB (146MB), run=1877-1877msec 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.926 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.926 rmmod nvme_tcp 00:15:25.186 rmmod nvme_fabrics 00:15:25.186 rmmod nvme_keyring 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75523 ']' 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75523 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75523 ']' 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75523 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75523 00:15:25.186 killing process with pid 75523 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75523' 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75523 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75523 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:25.186 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:25.186 14:20:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:25.187 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:25.187 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:25.187 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:25.187 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:25.187 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:25.187 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:25.446 00:15:25.446 real 0m8.401s 00:15:25.446 user 0m33.495s 00:15:25.446 sys 0m2.342s 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.446 ************************************ 00:15:25.446 END TEST nvmf_fio_host 00:15:25.446 ************************************ 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.446 14:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.706 ************************************ 00:15:25.706 START TEST nvmf_failover 
00:15:25.706 ************************************ 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:25.706 * Looking for test storage... 00:15:25.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.706 --rc genhtml_branch_coverage=1 00:15:25.706 --rc genhtml_function_coverage=1 00:15:25.706 --rc genhtml_legend=1 00:15:25.706 --rc geninfo_all_blocks=1 00:15:25.706 --rc geninfo_unexecuted_blocks=1 00:15:25.706 00:15:25.706 ' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.706 --rc genhtml_branch_coverage=1 00:15:25.706 --rc genhtml_function_coverage=1 00:15:25.706 --rc genhtml_legend=1 00:15:25.706 --rc geninfo_all_blocks=1 00:15:25.706 --rc geninfo_unexecuted_blocks=1 00:15:25.706 00:15:25.706 ' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.706 --rc genhtml_branch_coverage=1 00:15:25.706 --rc genhtml_function_coverage=1 00:15:25.706 --rc genhtml_legend=1 00:15:25.706 --rc geninfo_all_blocks=1 00:15:25.706 --rc geninfo_unexecuted_blocks=1 00:15:25.706 00:15:25.706 ' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.706 --rc genhtml_branch_coverage=1 00:15:25.706 --rc genhtml_function_coverage=1 00:15:25.706 --rc genhtml_legend=1 00:15:25.706 --rc geninfo_all_blocks=1 00:15:25.706 --rc geninfo_unexecuted_blocks=1 00:15:25.706 00:15:25.706 ' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 
14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.706 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.707 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
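The common.sh block above seeds the initiator-side identity for the failover test: ports 4420/4421/4422, a freshly generated host NQN and host ID, and the `nvme connect` helper. This particular run drives I/O through bdevperf rather than the kernel initiator, but with the values captured above an equivalent kernel-side connect to the subsystem created later in the trace would look roughly like:

    # illustrative only - the failover test below attaches via bdevperf instead
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 \
        --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892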
00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:25.707 Cannot find device "nvmf_init_br" 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:25.707 Cannot find device "nvmf_init_br2" 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:25.707 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:25.966 Cannot find device "nvmf_tgt_br" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.966 Cannot find device "nvmf_tgt_br2" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:25.966 Cannot find device "nvmf_init_br" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:25.966 Cannot find device "nvmf_init_br2" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:25.966 Cannot find device "nvmf_tgt_br" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:25.966 Cannot find device "nvmf_tgt_br2" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:25.966 Cannot find device "nvmf_br" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:25.966 Cannot find device "nvmf_init_if" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:25.966 Cannot find device "nvmf_init_if2" 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.966 
14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.966 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:26.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:26.224 00:15:26.224 --- 10.0.0.3 ping statistics --- 00:15:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.224 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:26.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:26.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:26.224 00:15:26.224 --- 10.0.0.4 ping statistics --- 00:15:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.224 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:26.224 00:15:26.224 --- 10.0.0.1 ping statistics --- 00:15:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.224 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:26.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:26.224 00:15:26.224 --- 10.0.0.2 ping statistics --- 00:15:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.224 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75919 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75919 00:15:26.224 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75919 ']' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.224 14:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:26.224 [2024-12-10 14:20:50.944576] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:26.224 [2024-12-10 14:20:50.945374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.483 [2024-12-10 14:20:51.097745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.483 [2024-12-10 14:20:51.138249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.483 [2024-12-10 14:20:51.138579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.483 [2024-12-10 14:20:51.138745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.483 [2024-12-10 14:20:51.138913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.483 [2024-12-10 14:20:51.139050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
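The startup notices above also say how to inspect this target instance while it runs: tracepoint group 0xFFFF is enabled, so a snapshot can be pulled from the shared-memory trace ring. Assuming the default in-tree build layout, that amounts to:

    # live snapshot of the trace ring for app name "nvmf", shm id 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw ring buffer for offline decoding, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/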
00:15:26.483 [2024-12-10 14:20:51.140080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.483 [2024-12-10 14:20:51.140178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.483 [2024-12-10 14:20:51.140185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.483 [2024-12-10 14:20:51.174634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.419 14:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.419 [2024-12-10 14:20:52.194935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.419 14:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:27.987 Malloc0 00:15:27.987 14:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.987 14:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:28.246 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:28.505 [2024-12-10 14:20:53.256426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.505 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:28.763 [2024-12-10 14:20:53.516578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:28.763 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:29.022 [2024-12-10 14:20:53.752771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
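Before bdevperf comes up, the target has been fully provisioned through rpc.py: a TCP transport, one 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on ports 4420, 4421 and 4422 so the test has paths to fail over between. The same sequence the trace shows, gathered in one place (rpc.py uses its default /var/tmp/spdk.sock here, which is reachable even though the target runs in a network namespace, since UNIX sockets are not network-namespaced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done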
00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75976 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75976 /var/tmp/bdevperf.sock 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75976 ']' 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.022 14:20:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:29.281 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.281 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:29.281 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:29.848 NVMe0n1 00:15:29.848 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:30.107 00:15:30.107 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75992 00:15:30.107 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.107 14:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:31.043 14:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:31.610 14:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:34.896 14:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:34.896 00:15:34.896 14:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:35.155 14:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:38.445 14:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:38.445 [2024-12-10 14:21:03.179383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.445 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:39.379 14:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:39.946 [2024-12-10 14:21:04.501876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986640 is same with the state(6) to be set 00:15:39.946 [2024-12-10 14:21:04.501912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986640 is same with the state(6) to be set 00:15:39.946 [2024-12-10 14:21:04.501921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986640 is same with the state(6) to be set 00:15:39.946 [2024-12-10 14:21:04.501930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986640 is same with the state(6) to be set 00:15:39.946 [2024-12-10 14:21:04.501938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986640 is same with the state(6) to be set 00:15:39.946 14:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75992 00:15:45.211 { 00:15:45.211 "results": [ 00:15:45.211 { 00:15:45.211 "job": "NVMe0n1", 00:15:45.211 "core_mask": "0x1", 00:15:45.211 "workload": "verify", 00:15:45.211 "status": "finished", 00:15:45.211 "verify_range": { 00:15:45.211 "start": 0, 00:15:45.211 "length": 16384 00:15:45.211 }, 00:15:45.211 "queue_depth": 128, 00:15:45.211 "io_size": 4096, 00:15:45.211 "runtime": 15.006879, 00:15:45.211 "iops": 8339.242290152402, 00:15:45.211 "mibps": 32.57516519590782, 00:15:45.211 "io_failed": 3141, 00:15:45.211 "io_timeout": 0, 00:15:45.211 "avg_latency_us": 14940.067850253374, 00:15:45.211 "min_latency_us": 677.7018181818182, 00:15:45.211 "max_latency_us": 18469.236363636363 00:15:45.211 } 00:15:45.211 ], 00:15:45.211 "core_count": 1 00:15:45.211 } 00:15:45.211 14:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75976 00:15:45.211 14:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75976 ']' 00:15:45.211 14:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75976 00:15:45.211 14:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:45.211 14:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.211 14:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75976 00:15:45.211 killing process with pid 75976 00:15:45.211 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.211 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.211 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75976' 00:15:45.211 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75976 00:15:45.211 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75976 00:15:45.477 14:21:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.477 [2024-12-10 14:20:53.821413] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:45.477 [2024-12-10 14:20:53.821524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75976 ] 00:15:45.477 [2024-12-10 14:20:53.962645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.477 [2024-12-10 14:20:53.996691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.477 [2024-12-10 14:20:54.027958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.477 Running I/O for 15 seconds... 00:15:45.477 6989.00 IOPS, 27.30 MiB/s [2024-12-10T14:21:10.314Z] [2024-12-10 14:20:56.134459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.477 [2024-12-10 14:20:56.134531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.477 [2024-12-10 14:20:56.134563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.477 [2024-12-10 14:20:56.134580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.477 [2024-12-10 14:20:56.134597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.477 [2024-12-10 14:20:56.134612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.477 [2024-12-10 14:20:56.134629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.477 [2024-12-10 14:20:56.134644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.477 [2024-12-10 14:20:56.134660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.477 [2024-12-10 14:20:56.134675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.134706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.134736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 
nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.134824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.134867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.134945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.134994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 
[2024-12-10 14:20:56.135692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.135813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.135845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.135877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.135964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.135986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.136002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.136032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.136063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.478 [2024-12-10 14:20:56.136094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.136139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.136195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.136250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.136295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.478 [2024-12-10 14:20:56.136319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.478 [2024-12-10 14:20:56.136339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.136719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.136949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.136997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.137012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 
[2024-12-10 14:20:56.137157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.479 [2024-12-10 14:20:56.137521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.137552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.137583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.137613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.137647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.479 [2024-12-10 14:20:56.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.479 [2024-12-10 14:20:56.137695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.480 [2024-12-10 14:20:56.137710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.480 [2024-12-10 14:20:56.137750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.480 [2024-12-10 14:20:56.137780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:108 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.137811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.137841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.137881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.137913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.137944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.137960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.137986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69048 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:45.480 [2024-12-10 14:20:56.138650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.480 [2024-12-10 14:20:56.138737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230fb00 is same with the state(6) to be set 00:15:45.480 [2024-12-10 14:20:56.138792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.138805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.138816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69152 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.138835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.138873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.138889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69416 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.138908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.138931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.138946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.138983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69424 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.139016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.139040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.139052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.139063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69432 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.139077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.139091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.139101] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.139111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69440 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.139125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.139139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.139149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.139160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69448 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.139173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.139188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.139198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.139209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69456 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.139222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.139236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.139246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.139257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69464 len:8 PRP1 0x0 PRP2 0x0 00:15:45.480 [2024-12-10 14:20:56.139281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.480 [2024-12-10 14:20:56.139296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.480 [2024-12-10 14:20:56.139306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.480 [2024-12-10 14:20:56.139317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69472 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69480 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69488 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69496 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69504 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69512 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69520 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 14:20:56.139693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69528 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.481 [2024-12-10 14:20:56.139729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.481 [2024-12-10 
14:20:56.139739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69536 len:8 PRP1 0x0 PRP2 0x0 00:15:45.481 [2024-12-10 14:20:56.139751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139803] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:45.481 [2024-12-10 14:20:56.139864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.481 [2024-12-10 14:20:56.139902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.481 [2024-12-10 14:20:56.139936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.481 [2024-12-10 14:20:56.139981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.139996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.481 [2024-12-10 14:20:56.140024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:56.140042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:45.481 [2024-12-10 14:20:56.144154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:45.481 [2024-12-10 14:20:56.144200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ec60 (9): Bad file descriptor 00:15:45.481 [2024-12-10 14:20:56.172957] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
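[editor note] The abort storm above is the first failover event captured in try.txt: once host/failover.sh removes the 10.0.0.3:4420 listener, every queued command on that path completes with "ABORTED - SQ DELETION", bdev_nvme marks the controller failed, and the initiator reconnects on the 4421 path ("Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 ... Resetting controller successful"). The failover is driven purely by listener manipulation while bdevperf keeps its verify workload running. A minimal sketch of that driving sequence, using only RPCs that appear earlier in this log (addresses, ports and the -x failover attach mode taken from it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"

    # register two trids on the initiator under the same bdev name; -x failover adds the
    # second one as an alternate path rather than an active one
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -x failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -x failover

    # while bdevperf runs verify I/O, drop the active listener on the target; outstanding
    # commands are aborted and bdev_nvme fails over to the next registered path
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3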
00:15:45.481 7730.00 IOPS, 30.20 MiB/s [2024-12-10T14:21:10.318Z] 8105.33 IOPS, 31.66 MiB/s [2024-12-10T14:21:10.318Z] 8297.00 IOPS, 32.41 MiB/s [2024-12-10T14:21:10.318Z] [2024-12-10 14:20:59.860106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.481 [2024-12-10 14:20:59.860455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.481 [2024-12-10 14:20:59.860821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.481 [2024-12-10 14:20:59.860835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.860850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.860864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.860879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.860893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.860908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.860921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.860936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.860950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 
[2024-12-10 14:20:59.861562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.482 [2024-12-10 14:20:59.861808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.861940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.861985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.862004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.862018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.482 [2024-12-10 14:20:59.862043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.482 [2024-12-10 14:20:59.862058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78512 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.862820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:45.483 [2024-12-10 14:20:59.862850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.862955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.862991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.863076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.863108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.483 [2024-12-10 14:20:59.863139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.483 [2024-12-10 14:20:59.863410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.483 [2024-12-10 14:20:59.863423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863589] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.863937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.863981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.863994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.864023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.864064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.864097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.864146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.864174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.484 [2024-12-10 14:20:59.864203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 
[2024-12-10 14:20:59.864247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.484 [2024-12-10 14:20:59.864403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244d450 is same with the state(6) to be set 00:15:45.484 [2024-12-10 14:20:59.864435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.484 [2024-12-10 14:20:59.864447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.484 [2024-12-10 14:20:59.864457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78776 len:8 PRP1 0x0 PRP2 0x0 00:15:45.484 [2024-12-10 14:20:59.864470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864523] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:45.484 [2024-12-10 14:20:59.864591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.484 [2024-12-10 14:20:59.864614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:45.484 [2024-12-10 14:20:59.864644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.484 [2024-12-10 14:20:59.864675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.484 [2024-12-10 14:20:59.864701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.484 [2024-12-10 14:20:59.864715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:45.484 [2024-12-10 14:20:59.864768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ec60 (9): Bad file descriptor 00:15:45.484 [2024-12-10 14:20:59.868563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:45.484 [2024-12-10 14:20:59.891705] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:15:45.484 8394.40 IOPS, 32.79 MiB/s [2024-12-10T14:21:10.321Z] 8561.33 IOPS, 33.44 MiB/s [2024-12-10T14:21:10.321Z] 8654.86 IOPS, 33.81 MiB/s [2024-12-10T14:21:10.321Z] 8711.00 IOPS, 34.03 MiB/s [2024-12-10T14:21:10.321Z] 8765.33 IOPS, 34.24 MiB/s [2024-12-10T14:21:10.321Z] [2024-12-10 14:21:04.501764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.485 [2024-12-10 14:21:04.501825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.501844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.485 [2024-12-10 14:21:04.501858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.501871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.485 [2024-12-10 14:21:04.501883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.501896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.485 [2024-12-10 14:21:04.501908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.501921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ec60 is same with the state(6) to be set 00:15:45.485 [2024-12-10 14:21:04.502906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.502945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.502983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.502997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.485 [2024-12-10 14:21:04.503947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.503960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.503989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 
[2024-12-10 14:21:04.504005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.504018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.504045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.504059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.504074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.504087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.504102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.504115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.504129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.504143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.485 [2024-12-10 14:21:04.504157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.485 [2024-12-10 14:21:04.504171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.504450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34024 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.504960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.504991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.486 [2024-12-10 14:21:04.505196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.505224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 
14:21:04.505252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.486 [2024-12-10 14:21:04.505280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.486 [2024-12-10 14:21:04.505295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.505688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.505968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.505998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.506031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.487 [2024-12-10 14:21:04.506082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.487 [2024-12-10 14:21:04.506302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f6d0 is same with the state(6) to be set 00:15:45.487 [2024-12-10 14:21:04.506332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.487 [2024-12-10 14:21:04.506357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.487 [2024-12-10 14:21:04.506368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33768 len:8 PRP1 0x0 PRP2 0x0 00:15:45.487 [2024-12-10 14:21:04.506381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.487 [2024-12-10 14:21:04.506404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.487 [2024-12-10 14:21:04.506414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34192 len:8 PRP1 0x0 PRP2 0x0 00:15:45.487 [2024-12-10 14:21:04.506441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.487 [2024-12-10 14:21:04.506463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.487 [2024-12-10 14:21:04.506476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34200 len:8 PRP1 0x0 PRP2 0x0 00:15:45.487 [2024-12-10 14:21:04.506489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.487 [2024-12-10 14:21:04.506511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.487 [2024-12-10 14:21:04.506521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34208 len:8 PRP1 0x0 PRP2 0x0 00:15:45.487 [2024-12-10 14:21:04.506533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.487 [2024-12-10 14:21:04.506554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.487 [2024-12-10 14:21:04.506563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34216 len:8 PRP1 0x0 PRP2 0x0 00:15:45.487 [2024-12-10 14:21:04.506575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.487 [2024-12-10 14:21:04.506587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34224 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34232 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34248 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 
14:21:04.506833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34264 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34272 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.506945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34280 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.506958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.506987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.506998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34288 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34296 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34304 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507187] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34320 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34328 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34336 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.488 [2024-12-10 14:21:04.507438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:15:45.488 [2024-12-10 14:21:04.507509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.488 [2024-12-10 14:21:04.507519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34360 len:8 PRP1 0x0 PRP2 0x0 00:15:45.488 [2024-12-10 14:21:04.507532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.488 [2024-12-10 14:21:04.507580] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:45.488 [2024-12-10 14:21:04.507598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:45.488 [2024-12-10 14:21:04.511617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:45.488 [2024-12-10 14:21:04.511654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ec60 (9): Bad file descriptor 00:15:45.488 [2024-12-10 14:21:04.536187] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:15:45.488 8681.50 IOPS, 33.91 MiB/s [2024-12-10T14:21:10.325Z] 8520.64 IOPS, 33.28 MiB/s [2024-12-10T14:21:10.325Z] 8407.92 IOPS, 32.84 MiB/s [2024-12-10T14:21:10.325Z] 8322.38 IOPS, 32.51 MiB/s [2024-12-10T14:21:10.325Z] 8294.43 IOPS, 32.40 MiB/s [2024-12-10T14:21:10.325Z] 8337.73 IOPS, 32.57 MiB/s 00:15:45.488 Latency(us) 00:15:45.488 [2024-12-10T14:21:10.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.488 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.488 Verification LBA range: start 0x0 length 0x4000 00:15:45.488 NVMe0n1 : 15.01 8339.24 32.58 209.30 0.00 14940.07 677.70 18469.24 00:15:45.488 [2024-12-10T14:21:10.325Z] =================================================================================================================== 00:15:45.488 [2024-12-10T14:21:10.325Z] Total : 8339.24 32.58 209.30 0.00 14940.07 677.70 18469.24 00:15:45.488 Received shutdown signal, test time was about 15.000000 seconds 00:15:45.488 00:15:45.488 Latency(us) 00:15:45.488 [2024-12-10T14:21:10.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.488 [2024-12-10T14:21:10.325Z] =================================================================================================================== 00:15:45.488 [2024-12-10T14:21:10.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.488 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:45.488 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:45.488 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:45.488 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76167 00:15:45.488 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:45.488 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76167 /var/tmp/bdevperf.sock 00:15:45.489 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76167 ']' 00:15:45.489 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.489 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.489 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.489 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.489 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:45.748 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.748 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:45.748 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:46.007 [2024-12-10 14:21:10.757968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:46.007 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:46.266 [2024-12-10 14:21:11.006166] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:46.266 14:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:46.833 NVMe0n1 00:15:46.833 14:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.092 00:15:47.092 14:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.351 00:15:47.351 14:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:47.351 14:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:47.609 14:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.868 14:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:51.185 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.185 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:51.185 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76242 00:15:51.185 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.185 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76242 00:15:52.563 { 00:15:52.563 "results": [ 00:15:52.563 { 00:15:52.563 "job": "NVMe0n1", 00:15:52.563 "core_mask": "0x1", 00:15:52.563 "workload": "verify", 00:15:52.563 "status": "finished", 00:15:52.563 "verify_range": { 00:15:52.563 "start": 0, 00:15:52.563 "length": 16384 00:15:52.563 }, 00:15:52.563 "queue_depth": 128, 00:15:52.563 "io_size": 4096, 00:15:52.563 "runtime": 1.015346, 00:15:52.563 "iops": 6701.163938204317, 00:15:52.563 "mibps": 26.176421633610612, 00:15:52.563 "io_failed": 0, 00:15:52.563 "io_timeout": 0, 00:15:52.563 "avg_latency_us": 19022.949409438297, 00:15:52.563 "min_latency_us": 2115.0254545454545, 00:15:52.563 "max_latency_us": 16086.10909090909 00:15:52.563 } 00:15:52.563 ], 00:15:52.563 "core_count": 1 00:15:52.563 } 00:15:52.563 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:52.563 [2024-12-10 14:21:10.210988] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:15:52.563 [2024-12-10 14:21:10.211124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76167 ] 00:15:52.563 [2024-12-10 14:21:10.355589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.563 [2024-12-10 14:21:10.388930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.563 [2024-12-10 14:21:10.417475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.563 [2024-12-10 14:21:12.625039] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:52.563 [2024-12-10 14:21:12.625185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.563 [2024-12-10 14:21:12.625209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.563 [2024-12-10 14:21:12.625227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.563 [2024-12-10 14:21:12.625240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.563 [2024-12-10 14:21:12.625253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.563 [2024-12-10 14:21:12.625265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.563 [2024-12-10 14:21:12.625278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.563 [2024-12-10 14:21:12.625291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.563 [2024-12-10 14:21:12.625303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in 
failed state. 00:15:52.563 [2024-12-10 14:21:12.625353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:52.563 [2024-12-10 14:21:12.625397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1810c60 (9): Bad file descriptor 00:15:52.563 [2024-12-10 14:21:12.629749] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:52.563 Running I/O for 1 seconds... 00:15:52.563 6676.00 IOPS, 26.08 MiB/s 00:15:52.563 Latency(us) 00:15:52.563 [2024-12-10T14:21:17.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.563 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.563 Verification LBA range: start 0x0 length 0x4000 00:15:52.563 NVMe0n1 : 1.02 6701.16 26.18 0.00 0.00 19022.95 2115.03 16086.11 00:15:52.563 [2024-12-10T14:21:17.400Z] =================================================================================================================== 00:15:52.563 [2024-12-10T14:21:17.400Z] Total : 6701.16 26.18 0.00 0.00 19022.95 2115.03 16086.11 00:15:52.563 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.563 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:52.563 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.822 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.822 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:53.081 14:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.340 14:21:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:56.628 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:56.628 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 76167 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76167 ']' 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76167 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76167 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.887 killing process with pid 76167 
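For anyone replaying this failover run by hand, the teardown traced just above reduces to a short sequence of rpc.py calls against the bdevperf RPC socket. The sketch below only restates commands already visible in the trace (controller name NVMe0, socket /var/tmp/bdevperf.sock, subsystem nqn.2016-06.io.spdk:cnode1, target 10.0.0.3, ports 4422 and 4421); it is an illustrative outline of those steps, not a copy of failover.sh itself.

    #!/usr/bin/env bash
    # Sketch of the per-path teardown seen in the failover.sh trace above:
    # confirm the multipath controller is still registered, then detach the
    # remaining TCP paths (ports 4422 and 4421) one at a time.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Controller should still be present before any path is removed.
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the path on port 4422.
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Controller object should survive while another path remains.
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the path on port 4421.
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The intermediate bdev_nvme_get_controllers | grep -q NVMe0 checks in the trace appear to serve the same purpose as in this sketch: confirming that the NVMe0 controller survives each single-path detach while other paths are still attached.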
00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76167' 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76167 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76167 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:56.887 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.147 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.147 rmmod nvme_tcp 00:15:57.406 rmmod nvme_fabrics 00:15:57.406 rmmod nvme_keyring 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75919 ']' 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75919 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75919 ']' 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75919 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75919 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75919' 00:15:57.406 killing process with pid 75919 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75919 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75919 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.406 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:57.665 00:15:57.665 real 0m32.157s 00:15:57.665 user 2m3.653s 00:15:57.665 sys 0m5.942s 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:57.665 ************************************ 00:15:57.665 END TEST nvmf_failover 00:15:57.665 ************************************ 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.665 14:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.665 
14:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.925 ************************************ 00:15:57.925 START TEST nvmf_host_discovery 00:15:57.925 ************************************ 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:57.925 * Looking for test storage... 00:15:57.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:57.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.925 --rc genhtml_branch_coverage=1 00:15:57.925 --rc genhtml_function_coverage=1 00:15:57.925 --rc genhtml_legend=1 00:15:57.925 --rc geninfo_all_blocks=1 00:15:57.925 --rc geninfo_unexecuted_blocks=1 00:15:57.925 00:15:57.925 ' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:57.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.925 --rc genhtml_branch_coverage=1 00:15:57.925 --rc genhtml_function_coverage=1 00:15:57.925 --rc genhtml_legend=1 00:15:57.925 --rc geninfo_all_blocks=1 00:15:57.925 --rc geninfo_unexecuted_blocks=1 00:15:57.925 00:15:57.925 ' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:57.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.925 --rc genhtml_branch_coverage=1 00:15:57.925 --rc genhtml_function_coverage=1 00:15:57.925 --rc genhtml_legend=1 00:15:57.925 --rc geninfo_all_blocks=1 00:15:57.925 --rc geninfo_unexecuted_blocks=1 00:15:57.925 00:15:57.925 ' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:57.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.925 --rc genhtml_branch_coverage=1 00:15:57.925 --rc genhtml_function_coverage=1 00:15:57.925 --rc genhtml_legend=1 00:15:57.925 --rc geninfo_all_blocks=1 00:15:57.925 --rc geninfo_unexecuted_blocks=1 00:15:57.925 00:15:57.925 ' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.925 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
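The nvmftestinit/nvmf_veth_init sequence that follows builds a small virtual topology: two initiator-side veth interfaces on the host, two target-side veth interfaces inside the nvmf_tgt_ns_spdk namespace, and a bridge joining their peer ends. Reduced to a standalone sketch (interface names and addresses taken from the trace; this approximates what test/nvmf/common.sh does, it is not the script itself):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries traffic, the *_br end gets enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side interfaces move into the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiators get 10.0.0.1/.2, targets get 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring the carrier interfaces up on both sides of the namespace boundary
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # a bridge ties the peer ends together so host and namespace can reach each other
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    ip link set nvmf_br up

Once the bridge is up, the script opens TCP port 4420 on the initiator interfaces with iptables and verifies reachability in both directions with the pings seen further down. The "Cannot find device" and "Cannot open network namespace" messages directly below are expected: the cleanup half of the helper runs unconditionally before the topology exists.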
00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.926 Cannot find device "nvmf_init_br" 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.926 Cannot find device "nvmf_init_br2" 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:57.926 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:58.184 Cannot find device "nvmf_tgt_br" 00:15:58.184 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:58.184 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.184 Cannot find device "nvmf_tgt_br2" 00:15:58.184 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:58.184 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:58.184 Cannot find device "nvmf_init_br" 00:15:58.184 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:58.184 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:58.184 Cannot find device "nvmf_init_br2" 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:58.185 Cannot find device "nvmf_tgt_br" 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:58.185 Cannot find device "nvmf_tgt_br2" 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:58.185 Cannot find device "nvmf_br" 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:58.185 Cannot find device "nvmf_init_if" 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:58.185 Cannot find device "nvmf_init_if2" 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.185 14:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.185 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.185 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:15:58.444 00:15:58.444 --- 10.0.0.3 ping statistics --- 00:15:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.444 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:15:58.444 00:15:58.444 --- 10.0.0.4 ping statistics --- 00:15:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.444 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:58.444 00:15:58.444 --- 10.0.0.1 ping statistics --- 00:15:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.444 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:58.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:58.444 00:15:58.444 --- 10.0.0.2 ping statistics --- 00:15:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.444 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.444 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76571 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76571 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76571 ']' 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.445 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.445 [2024-12-10 14:21:23.185464] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
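nvmfappstart now launches the SPDK target inside the namespace, and waitforlisten blocks until its RPC socket answers. A minimal stand-in for that pair of steps (binary path and flags as logged above; the polling loop is a simplification of waitforlisten, not its actual code):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # poll the default RPC socket until the app is ready to accept commands
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done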
00:15:58.445 [2024-12-10 14:21:23.186191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.704 [2024-12-10 14:21:23.345744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.704 [2024-12-10 14:21:23.382791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.704 [2024-12-10 14:21:23.382868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.704 [2024-12-10 14:21:23.382891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.704 [2024-12-10 14:21:23.382901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.704 [2024-12-10 14:21:23.382909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.704 [2024-12-10 14:21:23.383353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.704 [2024-12-10 14:21:23.415880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.704 [2024-12-10 14:21:23.514233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.704 [2024-12-10 14:21:23.522383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.704 14:21:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.704 null0 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.704 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.963 null1 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76591 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76591 /tmp/host.sock 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76591 ']' 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.963 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.963 14:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.963 [2024-12-10 14:21:23.615165] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
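Two SPDK applications are now in play: the target (pid 76571, default /var/tmp/spdk.sock, running inside the namespace) and a second app acting as the NVMe-oF host (pid 76591, RPC on /tmp/host.sock). The discovery exercise that follows reduces to a handful of RPCs; condensed from the trace, with rpc.py standing in for the script's rpc_cmd wrapper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: TCP transport plus a discovery listener on 10.0.0.3:8009
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009

    # host side: point bdev_nvme at the discovery service; it attaches a
    # controller for every subsystem the discovery log page reports
    "$rpc" -s /tmp/host.sock log_set_flag bdev_nvme
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

With discovery running on the host, the remaining RPCs in the trace mutate the target (create nqn.2016-06.io.spdk:cnode0, add the null bdevs as namespaces, add data listeners on 4420 and 4421) and the test waits for those changes to show up on the host side.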
00:15:58.963 [2024-12-10 14:21:23.615256] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76591 ] 00:15:58.963 [2024-12-10 14:21:23.764889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.963 [2024-12-10 14:21:23.797276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.222 [2024-12-10 14:21:23.825990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.789 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.048 14:21:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 14:21:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.308 [2024-12-10 14:21:24.986643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.308 14:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:00.308 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:16:00.568 14:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:16:00.826 [2024-12-10 14:21:25.617242] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:00.826 [2024-12-10 14:21:25.617290] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:00.826 [2024-12-10 14:21:25.617327] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:00.826 [2024-12-10 14:21:25.623289] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:01.085 [2024-12-10 14:21:25.677795] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:01.085 [2024-12-10 14:21:25.678781] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21b7da0:1 started. 00:16:01.085 [2024-12-10 14:21:25.680556] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:01.085 [2024-12-10 14:21:25.680583] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:01.085 [2024-12-10 14:21:25.685865] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b7da0 was disconnected and freed. delete nvme_qpair. 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.653 14:21:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.653 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:01.913 [2024-12-10 14:21:26.489644] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21c6190:1 started. 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.913 [2024-12-10 14:21:26.496558] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21c6190 was disconnected and freed. delete nvme_qpair. 
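Every waitforcondition above follows the same pattern: query one of the two RPC sockets, reduce the JSON with jq, and compare against the expected value, retrying up to ten times. The helpers traced throughout (get_subsystem_names, get_bdev_list, get_notification_count) boil down to the following; reconstructed from the trace rather than copied from host/discovery.sh:

    get_subsystem_names() {
        "$rpc" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # count the notify events newer than the last consumed id, then advance it
        notification_count=$("$rpc" -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

The notification counts (1 after null0, 1 more after null1) are how the test confirms that each nvmf_subsystem_add_ns on the target actually surfaced as a new nvme0n1/nvme0n2 bdev on the host.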
00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.913 [2024-12-10 14:21:26.600698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:01.913 [2024-12-10 14:21:26.601839] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:01.913 [2024-12-10 14:21:26.601877] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:01.913 [2024-12-10 14:21:26.607853] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.913 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.914 [2024-12-10 14:21:26.671650] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:01.914 [2024-12-10 14:21:26.671705] 
bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:01.914 [2024-12-10 14:21:26.671717] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:01.914 [2024-12-10 14:21:26.671723] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:01.914 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.174 [2024-12-10 14:21:26.809193] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:02.174 [2024-12-10 14:21:26.809227] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.174 [2024-12-10 14:21:26.815197] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:02.174 [2024-12-10 14:21:26.815223] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:02.174 [2024-12-10 14:21:26.815324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.174 [2024-12-10 14:21:26.815370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:02.174 [2024-12-10 14:21:26.815384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.174 [2024-12-10 14:21:26.815394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.174 [2024-12-10 14:21:26.815404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.174 [2024-12-10 14:21:26.815413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.174 [2024-12-10 14:21:26.815424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.174 [2024-12-10 14:21:26.815434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.174 [2024-12-10 14:21:26.815443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2193fb0 is same with the state(6) to be set 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.174 14:21:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.174 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.175 14:21:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.175 14:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
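The is_notification_count_eq checks in this trace rely on a counter helper that asks the host RPC socket for notifications newer than the last seen notify_id and measures the length of the returned JSON array; the trace shows notify_id advancing 0 -> 1 -> 2 -> 4 as namespaces and listeners are added and removed. A minimal sketch of that helper, reconstructed from the host/discovery.sh@74-75 lines (the exact bookkeeping in the real script may differ):

# Count notifications newer than $notify_id and advance the cursor, as suggested
# by the host/discovery.sh@74-75 xtrace lines. A reconstruction, not the real source.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

# Usage mirroring is_notification_count_eq in the trace:
expected_count=2
waitforcondition 'get_notification_count && ((notification_count == expected_count))'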
00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:02.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.435 14:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.819 [2024-12-10 14:21:28.228018] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:03.819 [2024-12-10 14:21:28.228051] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:03.819 [2024-12-10 14:21:28.228084] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:03.819 [2024-12-10 14:21:28.234046] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:03.819 [2024-12-10 14:21:28.292502] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:03.819 [2024-12-10 14:21:28.293150] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x21c3210:1 started. 00:16:03.819 [2024-12-10 14:21:28.295020] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:03.819 [2024-12-10 14:21:28.295099] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:03.819 [2024-12-10 14:21:28.296868] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x21c3210 was disconnected and freed. delete nvme_qpair. 
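The xtrace around this point (host/discovery.sh@143 above, autotest_common.sh@652-655 and the -17 "File exists" response below) comes from NOT, the negative-assertion helper used to require that a command fails: a second bdev_nvme_start_discovery for a discovery service name that already exists on the host socket is expected to be rejected. A simplified sketch of that pattern (the real helper in autotest_common.sh also validates the command via valid_exec_arg and treats exit codes above 128 specially, as the trace shows):

# Simplified negative assertion: succeed only when the wrapped command fails.
# Illustrative reconstruction; the real NOT() in autotest_common.sh is more elaborate.
NOT() {
    local es=0
    "$@" || es=$?
    ((es != 0))
}

# Usage mirroring host/discovery.sh@143: re-registering the same discovery
# service must fail with JSON-RPC error -17 "File exists".
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w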
00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.819 request: 00:16:03.819 { 00:16:03.819 "name": "nvme", 00:16:03.819 "trtype": "tcp", 00:16:03.819 "traddr": "10.0.0.3", 00:16:03.819 "adrfam": "ipv4", 00:16:03.819 "trsvcid": "8009", 00:16:03.819 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:03.819 "wait_for_attach": true, 00:16:03.819 "method": "bdev_nvme_start_discovery", 00:16:03.819 "req_id": 1 00:16:03.819 } 00:16:03.819 Got JSON-RPC error response 00:16:03.819 response: 00:16:03.819 { 00:16:03.819 "code": -17, 00:16:03.819 "message": "File exists" 00:16:03.819 } 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.819 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.820 request: 00:16:03.820 { 00:16:03.820 "name": "nvme_second", 00:16:03.820 "trtype": "tcp", 00:16:03.820 "traddr": "10.0.0.3", 00:16:03.820 "adrfam": "ipv4", 00:16:03.820 "trsvcid": "8009", 00:16:03.820 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:03.820 "wait_for_attach": true, 00:16:03.820 "method": "bdev_nvme_start_discovery", 00:16:03.820 "req_id": 1 00:16:03.820 } 00:16:03.820 Got JSON-RPC error response 00:16:03.820 response: 00:16:03.820 { 00:16:03.820 "code": -17, 00:16:03.820 "message": "File exists" 00:16:03.820 } 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.820 14:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.756 [2024-12-10 14:21:29.555431] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:04.756 [2024-12-10 14:21:29.555511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x2192fb0 with addr=10.0.0.3, port=8010 00:16:04.756 [2024-12-10 14:21:29.555531] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:04.756 [2024-12-10 14:21:29.555540] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:04.756 [2024-12-10 14:21:29.555548] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:06.134 [2024-12-10 14:21:30.555435] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:06.134 [2024-12-10 14:21:30.555770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2192fb0 with addr=10.0.0.3, port=8010 00:16:06.134 [2024-12-10 14:21:30.555803] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:06.134 [2024-12-10 14:21:30.555815] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:06.134 [2024-12-10 14:21:30.555826] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:07.071 [2024-12-10 14:21:31.555260] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:07.071 request: 00:16:07.071 { 00:16:07.071 "name": "nvme_second", 00:16:07.071 "trtype": "tcp", 00:16:07.071 "traddr": "10.0.0.3", 00:16:07.071 "adrfam": "ipv4", 00:16:07.071 "trsvcid": "8010", 00:16:07.071 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:07.071 "wait_for_attach": false, 00:16:07.071 "attach_timeout_ms": 3000, 00:16:07.071 "method": "bdev_nvme_start_discovery", 00:16:07.071 "req_id": 1 00:16:07.071 } 00:16:07.071 Got JSON-RPC error response 00:16:07.071 response: 00:16:07.071 { 00:16:07.071 "code": -110, 00:16:07.071 "message": "Connection timed out" 00:16:07.071 } 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:07.071 14:21:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76591 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.071 rmmod nvme_tcp 00:16:07.071 rmmod nvme_fabrics 00:16:07.071 rmmod nvme_keyring 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76571 ']' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76571 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76571 ']' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76571 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76571 00:16:07.071 killing process with pid 76571 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76571' 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76571 00:16:07.071 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76571 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.329 14:21:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:07.329 14:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.329 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:07.587 ************************************ 00:16:07.587 END TEST nvmf_host_discovery 00:16:07.587 ************************************ 00:16:07.587 00:16:07.587 real 0m9.681s 00:16:07.587 user 0m18.644s 00:16:07.587 sys 0m1.898s 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.587 ************************************ 00:16:07.587 START TEST nvmf_host_multipath_status 00:16:07.587 ************************************ 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:07.587 * Looking for test 
storage... 00:16:07.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:07.587 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.846 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:07.846 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:07.846 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.846 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.847 --rc genhtml_branch_coverage=1 00:16:07.847 --rc genhtml_function_coverage=1 00:16:07.847 --rc genhtml_legend=1 00:16:07.847 --rc geninfo_all_blocks=1 00:16:07.847 --rc geninfo_unexecuted_blocks=1 00:16:07.847 00:16:07.847 ' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.847 --rc genhtml_branch_coverage=1 00:16:07.847 --rc genhtml_function_coverage=1 00:16:07.847 --rc genhtml_legend=1 00:16:07.847 --rc geninfo_all_blocks=1 00:16:07.847 --rc geninfo_unexecuted_blocks=1 00:16:07.847 00:16:07.847 ' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.847 --rc genhtml_branch_coverage=1 00:16:07.847 --rc genhtml_function_coverage=1 00:16:07.847 --rc genhtml_legend=1 00:16:07.847 --rc geninfo_all_blocks=1 00:16:07.847 --rc geninfo_unexecuted_blocks=1 00:16:07.847 00:16:07.847 ' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.847 --rc genhtml_branch_coverage=1 00:16:07.847 --rc genhtml_function_coverage=1 00:16:07.847 --rc genhtml_legend=1 00:16:07.847 --rc geninfo_all_blocks=1 00:16:07.847 --rc geninfo_unexecuted_blocks=1 00:16:07.847 00:16:07.847 ' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.847 14:21:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.847 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:07.847 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:07.848 Cannot find device "nvmf_init_br" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:07.848 Cannot find device "nvmf_init_br2" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:07.848 Cannot find device "nvmf_tgt_br" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.848 Cannot find device "nvmf_tgt_br2" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:07.848 Cannot find device "nvmf_init_br" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:07.848 Cannot find device "nvmf_init_br2" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:07.848 Cannot find device "nvmf_tgt_br" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:07.848 Cannot find device "nvmf_tgt_br2" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:07.848 Cannot find device "nvmf_br" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:07.848 Cannot find device "nvmf_init_if" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:07.848 Cannot find device "nvmf_init_if2" 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.848 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:08.108 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.108 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:08.108 00:16:08.108 --- 10.0.0.3 ping statistics --- 00:16:08.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.108 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:08.108 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:08.108 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:16:08.108 00:16:08.108 --- 10.0.0.4 ping statistics --- 00:16:08.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.108 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:08.108 00:16:08.108 --- 10.0.0.1 ping statistics --- 00:16:08.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.108 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:08.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:08.108 00:16:08.108 --- 10.0.0.2 ping statistics --- 00:16:08.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.108 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=77106 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 77106 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77106 ']' 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
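(Aside: the nvmf_veth_init sequence traced above, from "ip netns add" through the ping checks, can be reproduced stand-alone. A minimal sketch follows, using the same namespace, interface and address names that appear in the log; the SPDK_NVMF iptables comment tags and the error-tolerant teardown the real helper runs first are omitted.)

    #!/usr/bin/env bash
    # Sketch of the veth/bridge topology from the trace above:
    #   host side:   nvmf_init_if (10.0.0.1), nvmf_init_if2 (10.0.0.2)
    #   target netns nvmf_tgt_ns_spdk: nvmf_tgt_if (10.0.0.3), nvmf_tgt_if2 (10.0.0.4)
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs; the *_br ends stay on the host and get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target-side interfaces into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up, then create nvmf_br and enslave the bridge-side ends
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # accept NVMe/TCP traffic on the initiator interfaces and across the bridge
    # (the real helper adds "-m comment --comment SPDK_NVMF:..." to each rule)
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity check both directions, as the trace does
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1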
00:16:08.108 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.109 14:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.368 [2024-12-10 14:21:32.961631] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:16:08.368 [2024-12-10 14:21:32.962346] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.368 [2024-12-10 14:21:33.116499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:08.368 [2024-12-10 14:21:33.155564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.368 [2024-12-10 14:21:33.155627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.368 [2024-12-10 14:21:33.155641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.368 [2024-12-10 14:21:33.155652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.368 [2024-12-10 14:21:33.155660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.368 [2024-12-10 14:21:33.156604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.368 [2024-12-10 14:21:33.156621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.368 [2024-12-10 14:21:33.190707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77106 00:16:08.626 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:08.884 [2024-12-10 14:21:33.575672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.884 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:09.143 Malloc0 00:16:09.143 14:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:09.709 14:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.968 14:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:10.227 [2024-12-10 14:21:34.897600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:10.227 14:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:10.492 [2024-12-10 14:21:35.177801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77154 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77154 /var/tmp/bdevperf.sock 00:16:10.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77154 ']' 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
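(Aside: condensed, the target-side setup driven through rpc.py in the trace above is the following sequence. Paths, NQN and arguments are copied from the log; it assumes the nvmf_tgt started earlier is serving the default RPC socket /var/tmp/spdk.sock inside the nvmf_tgt_ns_spdk namespace.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with the options used by the test (taken verbatim from the trace)
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc bdev with 512-byte blocks backing the namespace
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # ANA-reporting subsystem (-r), any host allowed (-a), max 2 namespaces (-m 2)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # two listeners on the same target address: these become the two multipath paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421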
00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.492 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:10.760 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.760 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:10.760 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:11.019 14:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:11.587 Nvme0n1 00:16:11.587 14:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:11.847 Nvme0n1 00:16:11.847 14:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:11.847 14:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:13.751 14:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:13.751 14:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:14.319 14:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:14.577 14:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:15.513 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:15.513 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:15.513 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.513 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:15.772 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.772 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 
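(Aside: the initiator side of the trace, bdevperf plus the two attach_controller calls, can be sketched as below. Commands and arguments are copied from the log; the rpc_bperf wrapper and the sleep standing in for waitforlisten are illustrative additions, not part of the test script.)

    spdk=/home/vagrant/spdk_repo/spdk
    rpc_bperf() { "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }

    # start bdevperf idle (-z) so it waits for RPC configuration before doing I/O
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &
    sleep 2  # the real test uses waitforlisten on /var/tmp/bdevperf.sock instead

    # bdev_nvme options as used in the trace, then attach the same subsystem through
    # both listeners; -x multipath makes the second attach add a path to Nvme0
    # instead of failing
    rpc_bperf bdev_nvme_set_options -r -1
    for port in 4420 4421; do
        rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    done

    # kick off the configured verify workload while the ANA states are flipped
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -t 120 -s /var/tmp/bdevperf.sock perform_tests &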
00:16:15.772 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.772 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.339 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:16.339 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:16.339 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:16.339 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.598 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.598 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:16.598 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.598 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:16.857 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.857 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:16.857 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:16.857 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.116 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.116 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.116 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.116 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.375 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.375 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:17.375 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:17.942 14:21:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:18.201 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:19.139 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:19.139 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:19.139 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.139 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:19.398 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.398 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:19.398 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.398 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:19.657 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.657 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:19.657 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.657 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.224 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.224 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.224 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.224 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:20.225 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.225 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.225 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.225 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:20.790 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.790 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:20.790 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.790 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.048 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.048 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:21.048 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:21.307 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:21.899 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:22.835 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:22.835 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:22.835 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.835 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.094 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.094 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.094 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.094 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.352 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.352 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:23.352 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.352 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:16:23.610 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.610 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:23.610 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.610 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:23.868 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.868 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:23.868 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.868 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.126 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.126 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:24.126 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.126 14:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.384 14:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.384 14:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:24.385 14:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:24.951 14:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:24.951 14:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:26.325 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:26.325 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:26.325 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.325 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:26.325 14:21:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.325 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:26.325 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.325 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:26.581 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.581 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:26.581 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.581 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:26.840 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.840 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:26.840 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.840 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.098 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.098 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:27.098 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.098 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.355 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.355 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:27.355 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.355 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:27.612 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.612 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:27.612 14:21:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:27.870 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:28.128 14:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:29.502 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:29.502 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:29.502 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.502 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:29.502 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.502 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:29.502 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:29.502 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.761 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.761 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:29.761 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.761 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:30.018 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.018 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:30.018 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.018 14:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:30.277 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.277 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:30.277 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.277 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:30.535 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.535 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:30.535 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.535 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:30.793 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.793 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:30.793 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:31.051 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:31.309 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.713 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:32.971 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.971 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.971 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.971 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:33.537 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.537 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:33.537 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.537 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.795 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.795 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:33.795 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.795 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.054 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.054 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:34.054 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.054 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:34.313 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.313 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:34.880 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:34.880 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:34.880 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:35.448 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:36.385 14:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:36.385 14:22:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:36.385 14:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.385 14:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.644 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.644 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:36.644 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.644 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.902 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.902 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.902 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.902 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:37.160 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.160 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:37.160 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.160 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:37.419 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.419 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:37.419 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:37.419 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.679 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.679 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:37.679 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.679 14:22:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.937 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.937 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:37.937 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:38.196 14:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:38.456 14:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.834 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:40.093 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.093 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:40.093 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.093 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:40.352 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.352 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:40.352 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.352 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:40.610 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.610 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:40.610 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.610 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.881 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.881 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:40.881 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.881 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.140 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.140 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:41.140 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:41.708 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:41.708 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.085 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:43.344 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.344 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:43.344 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.344 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:43.603 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.603 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:43.603 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:43.603 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.861 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.861 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:43.861 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.861 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:44.119 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.120 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:44.120 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.120 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.687 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.687 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:44.687 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:44.687 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:44.946 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # 
sleep 1 00:16:46.322 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:46.322 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:46.322 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.322 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.322 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.322 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:46.322 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.322 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.579 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.579 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:46.579 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.579 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:46.837 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.837 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:46.837 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.837 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:47.095 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.095 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:47.095 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.095 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.354 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.354 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:47.354 14:22:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.354 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77154 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77154 ']' 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77154 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77154 00:16:47.613 killing process with pid 77154 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77154' 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77154 00:16:47.613 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77154 00:16:47.875 { 00:16:47.875 "results": [ 00:16:47.875 { 00:16:47.875 "job": "Nvme0n1", 00:16:47.875 "core_mask": "0x4", 00:16:47.875 "workload": "verify", 00:16:47.875 "status": "terminated", 00:16:47.875 "verify_range": { 00:16:47.875 "start": 0, 00:16:47.875 "length": 16384 00:16:47.875 }, 00:16:47.875 "queue_depth": 128, 00:16:47.875 "io_size": 4096, 00:16:47.875 "runtime": 35.796343, 00:16:47.875 "iops": 8698.067285811849, 00:16:47.875 "mibps": 33.976825335202534, 00:16:47.875 "io_failed": 0, 00:16:47.875 "io_timeout": 0, 00:16:47.875 "avg_latency_us": 14684.653590298716, 00:16:47.875 "min_latency_us": 729.8327272727273, 00:16:47.875 "max_latency_us": 4026531.84 00:16:47.875 } 00:16:47.875 ], 00:16:47.875 "core_count": 1 00:16:47.875 } 00:16:47.875 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77154 00:16:47.875 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:47.875 [2024-12-10 14:21:35.253385] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
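The sequence traced above cycles the two listeners through the ANA combinations (inaccessible, optimized, non_optimized), sleeps one second after each change, and then queries bdevperf's view of every path with bdev_nvme_get_io_paths. The fragment below is a minimal sketch of that cycle, not the test script itself: the RPC names, NQN, target address and jq filter are copied from the log, while the path_field helper and the specific assertion are illustrative only.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# path_field <port> <field>  -> prints the given field (true/false) for the
# io_path whose listener uses that TCP service id, as reported by bdevperf
path_field() {
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2"
}

# e.g. make 4420 inaccessible and 4421 optimized, give the initiator a second
# to react, then verify that 4421 has become the current path
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
sleep 1
[[ "$(path_field 4421 current)" == "true" ]] && echo "4421 is now the current path"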
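The JSON block printed when bdevperf terminates carries the headline numbers for the run (iops, mibps, runtime, latency). A hedged example only: assuming that JSON had been captured to a file named results.json (a hypothetical name, the log prints it inline), the summary line could be extracted with jq.

# field names (job, iops, mibps, runtime) are taken from the JSON shown above
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s over \(.runtime)s"' results.json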
00:16:47.876 [2024-12-10 14:21:35.253508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77154 ] 00:16:47.876 [2024-12-10 14:21:35.403559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.876 [2024-12-10 14:21:35.444379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.876 [2024-12-10 14:21:35.478516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.876 Running I/O for 90 seconds... 00:16:47.876 7445.00 IOPS, 29.08 MiB/s [2024-12-10T14:22:12.713Z] 7370.50 IOPS, 28.79 MiB/s [2024-12-10T14:22:12.713Z] 7303.00 IOPS, 28.53 MiB/s [2024-12-10T14:22:12.713Z] 7269.25 IOPS, 28.40 MiB/s [2024-12-10T14:22:12.713Z] 7500.80 IOPS, 29.30 MiB/s [2024-12-10T14:22:12.713Z] 7820.00 IOPS, 30.55 MiB/s [2024-12-10T14:22:12.713Z] 8076.57 IOPS, 31.55 MiB/s [2024-12-10T14:22:12.713Z] 8288.00 IOPS, 32.38 MiB/s [2024-12-10T14:22:12.713Z] 8357.22 IOPS, 32.65 MiB/s [2024-12-10T14:22:12.713Z] 8443.60 IOPS, 32.98 MiB/s [2024-12-10T14:22:12.713Z] 8604.18 IOPS, 33.61 MiB/s [2024-12-10T14:22:12.713Z] 8705.08 IOPS, 34.00 MiB/s [2024-12-10T14:22:12.713Z] 8799.08 IOPS, 34.37 MiB/s [2024-12-10T14:22:12.713Z] 8890.29 IOPS, 34.73 MiB/s [2024-12-10T14:22:12.713Z] 8973.53 IOPS, 35.05 MiB/s [2024-12-10T14:22:12.713Z] [2024-12-10 14:21:52.600171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.600512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.600938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.600964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.601008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.601044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.876 [2024-12-10 14:21:52.601077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.876 [2024-12-10 14:21:52.601320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.876 [2024-12-10 14:21:52.601334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.877 [2024-12-10 14:21:52.601917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.601978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.601997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602170] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.877 [2024-12-10 14:21:52.602365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.877 [2024-12-10 14:21:52.602378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.878 [2024-12-10 14:21:52.602410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.878 [2024-12-10 14:21:52.602458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.878 [2024-12-10 14:21:52.602489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.878 [2024-12-10 14:21:52.602520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 
14:21:52.602551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100256 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.602951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.602980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.878 [2024-12-10 14:21:52.603406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.878 [2024-12-10 14:21:52.603420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:16:47.879 [2024-12-10 14:21:52.603659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.603705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.603920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.603933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.879 [2024-12-10 14:21:52.604630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.604973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.605028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.605044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.605070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.605084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.605111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.605125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.605150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.605163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.605188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.605202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.879 [2024-12-10 14:21:52.605227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.879 [2024-12-10 14:21:52.605240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:21:52.605651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:21:52.605681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.880 8973.12 IOPS, 35.05 MiB/s [2024-12-10T14:22:12.717Z] 8445.29 IOPS, 32.99 MiB/s [2024-12-10T14:22:12.717Z] 7976.11 IOPS, 31.16 MiB/s [2024-12-10T14:22:12.717Z] 7556.32 IOPS, 29.52 MiB/s [2024-12-10T14:22:12.717Z] 7201.00 IOPS, 28.13 MiB/s [2024-12-10T14:22:12.717Z] 7287.19 IOPS, 28.47 MiB/s [2024-12-10T14:22:12.717Z] 7347.36 IOPS, 28.70 MiB/s [2024-12-10T14:22:12.717Z] 7428.91 IOPS, 29.02 MiB/s [2024-12-10T14:22:12.717Z] 7645.62 IOPS, 29.87 MiB/s [2024-12-10T14:22:12.717Z] 7837.20 IOPS, 30.61 MiB/s [2024-12-10T14:22:12.717Z] 8009.08 IOPS, 31.29 MiB/s [2024-12-10T14:22:12.717Z] 8088.48 IOPS, 31.60 MiB/s [2024-12-10T14:22:12.717Z] 8121.46 IOPS, 31.72 MiB/s [2024-12-10T14:22:12.717Z] 8156.72 IOPS, 31.86 MiB/s [2024-12-10T14:22:12.717Z] 8214.90 IOPS, 32.09 MiB/s [2024-12-10T14:22:12.717Z] 8359.55 IOPS, 32.65 MiB/s [2024-12-10T14:22:12.717Z] 8498.00 IOPS, 33.20 MiB/s [2024-12-10T14:22:12.717Z] 8620.03 IOPS, 33.67 MiB/s [2024-12-10T14:22:12.717Z] [2024-12-10 14:22:09.738689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.738744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.738821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.738842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.738865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.738880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.738901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.738915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.738936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.738963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.739120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.739388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.739457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.739493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.739531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.880 [2024-12-10 14:22:09.739568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.880 [2024-12-10 14:22:09.739605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.880 [2024-12-10 14:22:09.739627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:85 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.739753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.739790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.739970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.739985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:16:47.881 [2024-12-10 14:22:09.740472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.881 [2024-12-10 14:22:09.740595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.881 [2024-12-10 14:22:09.740740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.881 [2024-12-10 14:22:09.740768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.740785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.740808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.740823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.740845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.740859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.740881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.740896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.740918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.740966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.740985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.741097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.741134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.741326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.741386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.741447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.741463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.742847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.742877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.742906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.742923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.742945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.742978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:47.882 [2024-12-10 14:22:09.743018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.743055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.743106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.743156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.743195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.743232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.743268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.743305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.743342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.882 [2024-12-10 14:22:09.743378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.743428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.882 [2024-12-10 14:22:09.743450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.882 [2024-12-10 14:22:09.743464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.882 8658.09 IOPS, 33.82 MiB/s [2024-12-10T14:22:12.719Z] 8683.40 IOPS, 33.92 MiB/s [2024-12-10T14:22:12.719Z] Received shutdown signal, test time was about 35.797202 seconds 00:16:47.882 00:16:47.882 Latency(us) 00:16:47.882 [2024-12-10T14:22:12.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.882 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:47.882 Verification LBA range: start 0x0 length 0x4000 00:16:47.882 Nvme0n1 : 35.80 8698.07 33.98 0.00 0.00 14684.65 729.83 4026531.84 00:16:47.882 [2024-12-10T14:22:12.719Z] =================================================================================================================== 00:16:47.882 [2024-12-10T14:22:12.719Z] Total : 8698.07 33.98 0.00 0.00 14684.65 729.83 4026531.84 00:16:47.882 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.141 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.141 rmmod nvme_tcp 00:16:48.141 rmmod nvme_fabrics 00:16:48.142 rmmod nvme_keyring 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 77106 ']' 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 77106 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77106 ']' 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 
-- # kill -0 77106 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77106 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.142 killing process with pid 77106 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77106' 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77106 00:16:48.142 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77106 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:48.400 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:48.659 14:22:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:48.659 00:16:48.659 real 0m41.109s 00:16:48.659 user 2m14.411s 00:16:48.659 sys 0m11.654s 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.659 ************************************ 00:16:48.659 END TEST nvmf_host_multipath_status 00:16:48.659 ************************************ 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.659 ************************************ 00:16:48.659 START TEST nvmf_discovery_remove_ifc 00:16:48.659 ************************************ 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:48.659 * Looking for test storage... 
00:16:48.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:48.659 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:48.918 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:48.918 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.918 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.918 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.918 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.918 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:48.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.919 --rc genhtml_branch_coverage=1 00:16:48.919 --rc genhtml_function_coverage=1 00:16:48.919 --rc genhtml_legend=1 00:16:48.919 --rc geninfo_all_blocks=1 00:16:48.919 --rc geninfo_unexecuted_blocks=1 00:16:48.919 00:16:48.919 ' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:48.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.919 --rc genhtml_branch_coverage=1 00:16:48.919 --rc genhtml_function_coverage=1 00:16:48.919 --rc genhtml_legend=1 00:16:48.919 --rc geninfo_all_blocks=1 00:16:48.919 --rc geninfo_unexecuted_blocks=1 00:16:48.919 00:16:48.919 ' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:48.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.919 --rc genhtml_branch_coverage=1 00:16:48.919 --rc genhtml_function_coverage=1 00:16:48.919 --rc genhtml_legend=1 00:16:48.919 --rc geninfo_all_blocks=1 00:16:48.919 --rc geninfo_unexecuted_blocks=1 00:16:48.919 00:16:48.919 ' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:48.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.919 --rc genhtml_branch_coverage=1 00:16:48.919 --rc genhtml_function_coverage=1 00:16:48.919 --rc genhtml_legend=1 00:16:48.919 --rc geninfo_all_blocks=1 00:16:48.919 --rc geninfo_unexecuted_blocks=1 00:16:48.919 00:16:48.919 ' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.919 14:22:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.919 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.919 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.920 14:22:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:48.920 Cannot find device "nvmf_init_br" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:48.920 Cannot find device "nvmf_init_br2" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:48.920 Cannot find device "nvmf_tgt_br" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.920 Cannot find device "nvmf_tgt_br2" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:48.920 Cannot find device "nvmf_init_br" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:48.920 Cannot find device "nvmf_init_br2" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:48.920 Cannot find device "nvmf_tgt_br" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:48.920 Cannot find device "nvmf_tgt_br2" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:48.920 Cannot find device "nvmf_br" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:48.920 Cannot find device "nvmf_init_if" 00:16:48.920 14:22:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:48.920 Cannot find device "nvmf_init_if2" 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:48.920 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.179 14:22:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:49.179 14:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:49.179 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:49.179 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:49.179 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:49.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:49.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:49.179 00:16:49.179 --- 10.0.0.3 ping statistics --- 00:16:49.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.179 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:49.179 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:49.179 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:49.179 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:16:49.179 00:16:49.179 --- 10.0.0.4 ping statistics --- 00:16:49.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.179 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:49.179 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:49.438 00:16:49.438 --- 10.0.0.1 ping statistics --- 00:16:49.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.438 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:49.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:49.438 00:16:49.438 --- 10.0.0.2 ping statistics --- 00:16:49.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.438 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=78002 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 78002 00:16:49.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.438 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 78002 ']' 00:16:49.439 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.439 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.439 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
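The xtrace above is nvmf_veth_init building the test topology: a network namespace (nvmf_tgt_ns_spdk) for the target side, veth pairs whose bridge-facing ends are enslaved to the nvmf_br bridge, addresses 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A minimal standalone sketch of the same layout, condensed to a single initiator/target pair and using the names from the log (run as root; illustrative only, not the suite's helper itself):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                        # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # initiator -> target, as in the log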
00:16:49.439 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.439 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.439 [2024-12-10 14:22:14.108675] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:16:49.439 [2024-12-10 14:22:14.108786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.439 [2024-12-10 14:22:14.262291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.698 [2024-12-10 14:22:14.300338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.698 [2024-12-10 14:22:14.300412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.698 [2024-12-10 14:22:14.300433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.698 [2024-12-10 14:22:14.300443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.698 [2024-12-10 14:22:14.300451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.698 [2024-12-10 14:22:14.300829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.698 [2024-12-10 14:22:14.334927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.698 [2024-12-10 14:22:14.438370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.698 [2024-12-10 14:22:14.446521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:49.698 null0 00:16:49.698 [2024-12-10 14:22:14.478449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=78026 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 78026 /tmp/host.sock 00:16:49.698 
14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 78026 ']' 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.698 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.698 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.957 [2024-12-10 14:22:14.561048] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:16:49.957 [2024-12-10 14:22:14.561147] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78026 ] 00:16:49.957 [2024-12-10 14:22:14.707518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.957 [2024-12-10 14:22:14.747888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.216 [2024-12-10 14:22:14.869619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.216 14:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.151 [2024-12-10 14:22:15.911295] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:51.151 [2024-12-10 14:22:15.911345] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:51.151 [2024-12-10 14:22:15.911372] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:51.151 [2024-12-10 14:22:15.917362] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:51.151 [2024-12-10 14:22:15.971732] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:51.151 [2024-12-10 14:22:15.972672] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x244ef00:1 started. 00:16:51.151 [2024-12-10 14:22:15.974372] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:51.151 [2024-12-10 14:22:15.974446] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:51.151 [2024-12-10 14:22:15.974473] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:51.151 [2024-12-10 14:22:15.974489] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:51.151 [2024-12-10 14:22:15.974512] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.151 [2024-12-10 14:22:15.980125] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x244ef00 was disconnected and freed. delete nvme_qpair. 
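Discovery is driven entirely over the host application's RPC socket: rpc_cmd above sends bdev_nvme_start_discovery against /tmp/host.sock with a two-second controller-loss timeout, and wait_for_bdev then polls bdev_get_bdevs until the attached namespace shows up as nvme0n1. A rough equivalent using scripts/rpc.py directly (an illustrative sketch; the test itself goes through its rpc_cmd and get_bdev_list wrappers rather than these exact calls):

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # poll the bdev list until the discovered namespace appears
  until ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme0n1; do
      sleep 1
  done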
00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.151 14:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.410 14:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.347 14:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.722 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.722 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.722 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.723 14:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.658 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.659 14:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:55.594 14:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.528 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.787 [2024-12-10 14:22:21.402211] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:56.787 [2024-12-10 14:22:21.402275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.787 [2024-12-10 14:22:21.402290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.787 [2024-12-10 14:22:21.402304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.787 [2024-12-10 14:22:21.402314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.787 [2024-12-10 14:22:21.402324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.787 [2024-12-10 14:22:21.402335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.787 [2024-12-10 14:22:21.402345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.787 [2024-12-10 14:22:21.402354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.787 [2024-12-10 14:22:21.402365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.787 [2024-12-10 14:22:21.402375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.787 [2024-12-10 14:22:21.402384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242afc0 is same with the state(6) to be set 00:16:56.787 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.787 14:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 
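The sleep-1 loop above is wait_for_bdev '' waiting for the bdev list to drain after the target address was deleted and nvmf_tgt_if was taken down (the @75/@76 steps), and the 'errno 110: Connection timed out' entry is the host noticing that the TCP path has gone away. Condensed, the removal side of the test amounts to the following (a sketch with direct rpc.py calls in place of the suite's helpers):

  # drop the target's data-path address and interface inside the namespace
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # the host side keeps polling until bdev_get_bdevs returns an empty list
  while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
      sleep 1
  done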
00:16:56.787 [2024-12-10 14:22:21.412205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242afc0 (9): Bad file descriptor 00:16:56.787 [2024-12-10 14:22:21.422222] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:56.787 [2024-12-10 14:22:21.422250] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:56.787 [2024-12-10 14:22:21.422257] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:56.787 [2024-12-10 14:22:21.422263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:56.787 [2024-12-10 14:22:21.422308] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.739 [2024-12-10 14:22:22.470089] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:57.739 [2024-12-10 14:22:22.470555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242afc0 with addr=10.0.0.3, port=4420 00:16:57.739 [2024-12-10 14:22:22.470607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242afc0 is same with the state(6) to be set 00:16:57.739 [2024-12-10 14:22:22.470678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242afc0 (9): Bad file descriptor 00:16:57.739 [2024-12-10 14:22:22.471645] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:57.739 [2024-12-10 14:22:22.471744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:57.739 [2024-12-10 14:22:22.471770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:57.739 [2024-12-10 14:22:22.471792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:57.739 [2024-12-10 14:22:22.471812] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:57.739 [2024-12-10 14:22:22.471825] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:57.739 [2024-12-10 14:22:22.471837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:57.739 [2024-12-10 14:22:22.471857] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:57.739 [2024-12-10 14:22:22.471869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.739 14:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.675 [2024-12-10 14:22:23.471945] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:58.675 [2024-12-10 14:22:23.472010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:58.675 [2024-12-10 14:22:23.472041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:58.675 [2024-12-10 14:22:23.472052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:58.675 [2024-12-10 14:22:23.472064] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:58.675 [2024-12-10 14:22:23.472073] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:58.675 [2024-12-10 14:22:23.472080] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:58.675 [2024-12-10 14:22:23.472086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:58.675 [2024-12-10 14:22:23.472121] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:58.675 [2024-12-10 14:22:23.472171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.675 [2024-12-10 14:22:23.472186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.675 [2024-12-10 14:22:23.472200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.675 [2024-12-10 14:22:23.472209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.675 [2024-12-10 14:22:23.472220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.675 [2024-12-10 14:22:23.472229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.675 [2024-12-10 14:22:23.472239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.675 [2024-12-10 14:22:23.472249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.675 [2024-12-10 14:22:23.472259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.675 [2024-12-10 14:22:23.472269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.675 [2024-12-10 14:22:23.472278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:16:58.675 [2024-12-10 14:22:23.472691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b6a20 (9): Bad file descriptor 00:16:58.675 [2024-12-10 14:22:23.473704] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:58.675 [2024-12-10 14:22:23.473730] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.675 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:58.934 14:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.868 14:22:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:59.868 14:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.811 [2024-12-10 14:22:25.486377] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:00.811 [2024-12-10 14:22:25.486412] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:00.811 [2024-12-10 14:22:25.486432] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:00.811 [2024-12-10 14:22:25.492418] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:00.811 [2024-12-10 14:22:25.546874] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:00.811 [2024-12-10 14:22:25.547615] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x24571d0:1 started. 00:17:00.811 [2024-12-10 14:22:25.548843] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:00.811 [2024-12-10 14:22:25.548887] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:00.811 [2024-12-10 14:22:25.548911] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:00.811 [2024-12-10 14:22:25.548927] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:00.811 [2024-12-10 14:22:25.548936] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:00.811 [2024-12-10 14:22:25.555037] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x24571d0 was disconnected and freed. delete nvme_qpair. 
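Re-adding 10.0.0.3/24 and bringing nvmf_tgt_if back up (the @82/@83 steps a little earlier) lets the still-running discovery service reconnect: the log above shows a fresh discovery attach, a new controller (nvme1), and the namespace reappearing, this time as nvme1n1. The restore-and-wait step, sketched in the same condensed form as before (direct rpc.py use is an assumption, as above):

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # wait until the re-attached namespace is visible as nvme1n1
  until ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme1n1; do
      sleep 1
  done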
00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 78026 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 78026 ']' 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 78026 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78026 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.070 killing process with pid 78026 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78026' 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 78026 00:17:01.070 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 78026 00:17:01.330 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:01.330 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.330 14:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.330 rmmod nvme_tcp 00:17:01.330 rmmod nvme_fabrics 00:17:01.330 rmmod nvme_keyring 00:17:01.330 14:22:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 78002 ']' 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 78002 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 78002 ']' 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 78002 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78002 00:17:01.330 killing process with pid 78002 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78002' 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 78002 00:17:01.330 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 78002 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.589 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:01.848 ************************************ 00:17:01.848 END TEST nvmf_discovery_remove_ifc 00:17:01.848 ************************************ 00:17:01.848 00:17:01.848 real 0m13.075s 00:17:01.848 user 0m22.289s 00:17:01.848 sys 0m2.363s 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.848 ************************************ 00:17:01.848 START TEST nvmf_identify_kernel_target 00:17:01.848 ************************************ 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:01.848 * Looking for test storage... 
00:17:01.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.848 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:02.108 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.109 --rc genhtml_branch_coverage=1 00:17:02.109 --rc genhtml_function_coverage=1 00:17:02.109 --rc genhtml_legend=1 00:17:02.109 --rc geninfo_all_blocks=1 00:17:02.109 --rc geninfo_unexecuted_blocks=1 00:17:02.109 00:17:02.109 ' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.109 --rc genhtml_branch_coverage=1 00:17:02.109 --rc genhtml_function_coverage=1 00:17:02.109 --rc genhtml_legend=1 00:17:02.109 --rc geninfo_all_blocks=1 00:17:02.109 --rc geninfo_unexecuted_blocks=1 00:17:02.109 00:17:02.109 ' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.109 --rc genhtml_branch_coverage=1 00:17:02.109 --rc genhtml_function_coverage=1 00:17:02.109 --rc genhtml_legend=1 00:17:02.109 --rc geninfo_all_blocks=1 00:17:02.109 --rc geninfo_unexecuted_blocks=1 00:17:02.109 00:17:02.109 ' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.109 --rc genhtml_branch_coverage=1 00:17:02.109 --rc genhtml_function_coverage=1 00:17:02.109 --rc genhtml_legend=1 00:17:02.109 --rc geninfo_all_blocks=1 00:17:02.109 --rc geninfo_unexecuted_blocks=1 00:17:02.109 00:17:02.109 ' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.109 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:02.109 14:22:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.109 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.110 14:22:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:02.110 Cannot find device "nvmf_init_br" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:02.110 Cannot find device "nvmf_init_br2" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:02.110 Cannot find device "nvmf_tgt_br" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.110 Cannot find device "nvmf_tgt_br2" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:02.110 Cannot find device "nvmf_init_br" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:02.110 Cannot find device "nvmf_init_br2" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:02.110 Cannot find device "nvmf_tgt_br" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:02.110 Cannot find device "nvmf_tgt_br2" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:02.110 Cannot find device "nvmf_br" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:02.110 Cannot find device "nvmf_init_if" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:02.110 Cannot find device "nvmf_init_if2" 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.110 14:22:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.110 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.369 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.369 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.369 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.369 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:02.369 14:22:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:02.369 14:22:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:02.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:02.369 00:17:02.369 --- 10.0.0.3 ping statistics --- 00:17:02.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.369 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:02.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:02.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:17:02.369 00:17:02.369 --- 10.0.0.4 ping statistics --- 00:17:02.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.369 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:02.369 00:17:02.369 --- 10.0.0.1 ping statistics --- 00:17:02.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.369 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:02.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:17:02.369 00:17:02.369 --- 10.0.0.2 ping statistics --- 00:17:02.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.369 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:02.369 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:02.370 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:02.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.937 Waiting for block devices as requested 00:17:02.937 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.937 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:02.937 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:03.196 No valid GPT data, bailing 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:03.196 14:22:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:03.196 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:03.197 No valid GPT data, bailing 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:03.197 No valid GPT data, bailing 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:03.197 14:22:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:03.197 No valid GPT data, bailing 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -a 10.0.0.1 -t tcp -s 4420 00:17:03.456 00:17:03.456 Discovery Log Number of Records 2, Generation counter 2 00:17:03.456 =====Discovery Log Entry 0====== 00:17:03.456 trtype: tcp 00:17:03.456 adrfam: ipv4 00:17:03.456 subtype: current discovery subsystem 00:17:03.456 treq: not specified, sq flow control disable supported 00:17:03.456 portid: 1 00:17:03.456 trsvcid: 4420 00:17:03.456 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:03.456 traddr: 10.0.0.1 00:17:03.456 eflags: none 00:17:03.456 sectype: none 00:17:03.456 =====Discovery Log Entry 1====== 00:17:03.456 trtype: tcp 00:17:03.456 adrfam: ipv4 00:17:03.456 subtype: nvme subsystem 00:17:03.456 treq: not 
specified, sq flow control disable supported 00:17:03.456 portid: 1 00:17:03.456 trsvcid: 4420 00:17:03.456 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:03.456 traddr: 10.0.0.1 00:17:03.456 eflags: none 00:17:03.456 sectype: none 00:17:03.456 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:03.456 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:03.456 ===================================================== 00:17:03.456 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:03.456 ===================================================== 00:17:03.456 Controller Capabilities/Features 00:17:03.456 ================================ 00:17:03.456 Vendor ID: 0000 00:17:03.456 Subsystem Vendor ID: 0000 00:17:03.456 Serial Number: ea4dc8070f0d0d9c0b98 00:17:03.456 Model Number: Linux 00:17:03.456 Firmware Version: 6.8.9-20 00:17:03.456 Recommended Arb Burst: 0 00:17:03.456 IEEE OUI Identifier: 00 00 00 00:17:03.456 Multi-path I/O 00:17:03.456 May have multiple subsystem ports: No 00:17:03.456 May have multiple controllers: No 00:17:03.456 Associated with SR-IOV VF: No 00:17:03.456 Max Data Transfer Size: Unlimited 00:17:03.456 Max Number of Namespaces: 0 00:17:03.456 Max Number of I/O Queues: 1024 00:17:03.456 NVMe Specification Version (VS): 1.3 00:17:03.456 NVMe Specification Version (Identify): 1.3 00:17:03.456 Maximum Queue Entries: 1024 00:17:03.456 Contiguous Queues Required: No 00:17:03.456 Arbitration Mechanisms Supported 00:17:03.456 Weighted Round Robin: Not Supported 00:17:03.456 Vendor Specific: Not Supported 00:17:03.456 Reset Timeout: 7500 ms 00:17:03.456 Doorbell Stride: 4 bytes 00:17:03.456 NVM Subsystem Reset: Not Supported 00:17:03.456 Command Sets Supported 00:17:03.456 NVM Command Set: Supported 00:17:03.456 Boot Partition: Not Supported 00:17:03.456 Memory Page Size Minimum: 4096 bytes 00:17:03.456 Memory Page Size Maximum: 4096 bytes 00:17:03.456 Persistent Memory Region: Not Supported 00:17:03.456 Optional Asynchronous Events Supported 00:17:03.456 Namespace Attribute Notices: Not Supported 00:17:03.456 Firmware Activation Notices: Not Supported 00:17:03.456 ANA Change Notices: Not Supported 00:17:03.456 PLE Aggregate Log Change Notices: Not Supported 00:17:03.456 LBA Status Info Alert Notices: Not Supported 00:17:03.456 EGE Aggregate Log Change Notices: Not Supported 00:17:03.456 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.457 Zone Descriptor Change Notices: Not Supported 00:17:03.457 Discovery Log Change Notices: Supported 00:17:03.457 Controller Attributes 00:17:03.457 128-bit Host Identifier: Not Supported 00:17:03.457 Non-Operational Permissive Mode: Not Supported 00:17:03.457 NVM Sets: Not Supported 00:17:03.457 Read Recovery Levels: Not Supported 00:17:03.457 Endurance Groups: Not Supported 00:17:03.457 Predictable Latency Mode: Not Supported 00:17:03.457 Traffic Based Keep ALive: Not Supported 00:17:03.457 Namespace Granularity: Not Supported 00:17:03.457 SQ Associations: Not Supported 00:17:03.457 UUID List: Not Supported 00:17:03.457 Multi-Domain Subsystem: Not Supported 00:17:03.457 Fixed Capacity Management: Not Supported 00:17:03.457 Variable Capacity Management: Not Supported 00:17:03.457 Delete Endurance Group: Not Supported 00:17:03.457 Delete NVM Set: Not Supported 00:17:03.457 Extended LBA Formats Supported: Not Supported 00:17:03.457 Flexible Data 
Placement Supported: Not Supported 00:17:03.457 00:17:03.457 Controller Memory Buffer Support 00:17:03.457 ================================ 00:17:03.457 Supported: No 00:17:03.457 00:17:03.457 Persistent Memory Region Support 00:17:03.457 ================================ 00:17:03.457 Supported: No 00:17:03.457 00:17:03.457 Admin Command Set Attributes 00:17:03.457 ============================ 00:17:03.457 Security Send/Receive: Not Supported 00:17:03.457 Format NVM: Not Supported 00:17:03.457 Firmware Activate/Download: Not Supported 00:17:03.457 Namespace Management: Not Supported 00:17:03.457 Device Self-Test: Not Supported 00:17:03.457 Directives: Not Supported 00:17:03.457 NVMe-MI: Not Supported 00:17:03.457 Virtualization Management: Not Supported 00:17:03.457 Doorbell Buffer Config: Not Supported 00:17:03.457 Get LBA Status Capability: Not Supported 00:17:03.457 Command & Feature Lockdown Capability: Not Supported 00:17:03.457 Abort Command Limit: 1 00:17:03.457 Async Event Request Limit: 1 00:17:03.457 Number of Firmware Slots: N/A 00:17:03.457 Firmware Slot 1 Read-Only: N/A 00:17:03.716 Firmware Activation Without Reset: N/A 00:17:03.716 Multiple Update Detection Support: N/A 00:17:03.716 Firmware Update Granularity: No Information Provided 00:17:03.716 Per-Namespace SMART Log: No 00:17:03.716 Asymmetric Namespace Access Log Page: Not Supported 00:17:03.716 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:03.716 Command Effects Log Page: Not Supported 00:17:03.716 Get Log Page Extended Data: Supported 00:17:03.716 Telemetry Log Pages: Not Supported 00:17:03.716 Persistent Event Log Pages: Not Supported 00:17:03.717 Supported Log Pages Log Page: May Support 00:17:03.717 Commands Supported & Effects Log Page: Not Supported 00:17:03.717 Feature Identifiers & Effects Log Page:May Support 00:17:03.717 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.717 Data Area 4 for Telemetry Log: Not Supported 00:17:03.717 Error Log Page Entries Supported: 1 00:17:03.717 Keep Alive: Not Supported 00:17:03.717 00:17:03.717 NVM Command Set Attributes 00:17:03.717 ========================== 00:17:03.717 Submission Queue Entry Size 00:17:03.717 Max: 1 00:17:03.717 Min: 1 00:17:03.717 Completion Queue Entry Size 00:17:03.717 Max: 1 00:17:03.717 Min: 1 00:17:03.717 Number of Namespaces: 0 00:17:03.717 Compare Command: Not Supported 00:17:03.717 Write Uncorrectable Command: Not Supported 00:17:03.717 Dataset Management Command: Not Supported 00:17:03.717 Write Zeroes Command: Not Supported 00:17:03.717 Set Features Save Field: Not Supported 00:17:03.717 Reservations: Not Supported 00:17:03.717 Timestamp: Not Supported 00:17:03.717 Copy: Not Supported 00:17:03.717 Volatile Write Cache: Not Present 00:17:03.717 Atomic Write Unit (Normal): 1 00:17:03.717 Atomic Write Unit (PFail): 1 00:17:03.717 Atomic Compare & Write Unit: 1 00:17:03.717 Fused Compare & Write: Not Supported 00:17:03.717 Scatter-Gather List 00:17:03.717 SGL Command Set: Supported 00:17:03.717 SGL Keyed: Not Supported 00:17:03.717 SGL Bit Bucket Descriptor: Not Supported 00:17:03.717 SGL Metadata Pointer: Not Supported 00:17:03.717 Oversized SGL: Not Supported 00:17:03.717 SGL Metadata Address: Not Supported 00:17:03.717 SGL Offset: Supported 00:17:03.717 Transport SGL Data Block: Not Supported 00:17:03.717 Replay Protected Memory Block: Not Supported 00:17:03.717 00:17:03.717 Firmware Slot Information 00:17:03.717 ========================= 00:17:03.717 Active slot: 0 00:17:03.717 00:17:03.717 00:17:03.717 Error Log 
00:17:03.717 ========= 00:17:03.717 00:17:03.717 Active Namespaces 00:17:03.717 ================= 00:17:03.717 Discovery Log Page 00:17:03.717 ================== 00:17:03.717 Generation Counter: 2 00:17:03.717 Number of Records: 2 00:17:03.717 Record Format: 0 00:17:03.717 00:17:03.717 Discovery Log Entry 0 00:17:03.717 ---------------------- 00:17:03.717 Transport Type: 3 (TCP) 00:17:03.717 Address Family: 1 (IPv4) 00:17:03.717 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:03.717 Entry Flags: 00:17:03.717 Duplicate Returned Information: 0 00:17:03.717 Explicit Persistent Connection Support for Discovery: 0 00:17:03.717 Transport Requirements: 00:17:03.717 Secure Channel: Not Specified 00:17:03.717 Port ID: 1 (0x0001) 00:17:03.717 Controller ID: 65535 (0xffff) 00:17:03.717 Admin Max SQ Size: 32 00:17:03.717 Transport Service Identifier: 4420 00:17:03.717 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:03.717 Transport Address: 10.0.0.1 00:17:03.717 Discovery Log Entry 1 00:17:03.717 ---------------------- 00:17:03.717 Transport Type: 3 (TCP) 00:17:03.717 Address Family: 1 (IPv4) 00:17:03.717 Subsystem Type: 2 (NVM Subsystem) 00:17:03.717 Entry Flags: 00:17:03.717 Duplicate Returned Information: 0 00:17:03.717 Explicit Persistent Connection Support for Discovery: 0 00:17:03.717 Transport Requirements: 00:17:03.717 Secure Channel: Not Specified 00:17:03.717 Port ID: 1 (0x0001) 00:17:03.717 Controller ID: 65535 (0xffff) 00:17:03.717 Admin Max SQ Size: 32 00:17:03.717 Transport Service Identifier: 4420 00:17:03.717 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:03.717 Transport Address: 10.0.0.1 00:17:03.717 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:03.717 get_feature(0x01) failed 00:17:03.717 get_feature(0x02) failed 00:17:03.717 get_feature(0x04) failed 00:17:03.717 ===================================================== 00:17:03.717 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:03.717 ===================================================== 00:17:03.717 Controller Capabilities/Features 00:17:03.717 ================================ 00:17:03.717 Vendor ID: 0000 00:17:03.717 Subsystem Vendor ID: 0000 00:17:03.717 Serial Number: 97d072a3d713e39abedf 00:17:03.717 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:03.717 Firmware Version: 6.8.9-20 00:17:03.717 Recommended Arb Burst: 6 00:17:03.717 IEEE OUI Identifier: 00 00 00 00:17:03.717 Multi-path I/O 00:17:03.717 May have multiple subsystem ports: Yes 00:17:03.717 May have multiple controllers: Yes 00:17:03.717 Associated with SR-IOV VF: No 00:17:03.717 Max Data Transfer Size: Unlimited 00:17:03.717 Max Number of Namespaces: 1024 00:17:03.717 Max Number of I/O Queues: 128 00:17:03.717 NVMe Specification Version (VS): 1.3 00:17:03.717 NVMe Specification Version (Identify): 1.3 00:17:03.717 Maximum Queue Entries: 1024 00:17:03.717 Contiguous Queues Required: No 00:17:03.717 Arbitration Mechanisms Supported 00:17:03.717 Weighted Round Robin: Not Supported 00:17:03.717 Vendor Specific: Not Supported 00:17:03.717 Reset Timeout: 7500 ms 00:17:03.717 Doorbell Stride: 4 bytes 00:17:03.717 NVM Subsystem Reset: Not Supported 00:17:03.717 Command Sets Supported 00:17:03.717 NVM Command Set: Supported 00:17:03.717 Boot Partition: Not Supported 00:17:03.717 Memory 
Page Size Minimum: 4096 bytes 00:17:03.717 Memory Page Size Maximum: 4096 bytes 00:17:03.717 Persistent Memory Region: Not Supported 00:17:03.717 Optional Asynchronous Events Supported 00:17:03.717 Namespace Attribute Notices: Supported 00:17:03.717 Firmware Activation Notices: Not Supported 00:17:03.717 ANA Change Notices: Supported 00:17:03.717 PLE Aggregate Log Change Notices: Not Supported 00:17:03.717 LBA Status Info Alert Notices: Not Supported 00:17:03.717 EGE Aggregate Log Change Notices: Not Supported 00:17:03.717 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.717 Zone Descriptor Change Notices: Not Supported 00:17:03.717 Discovery Log Change Notices: Not Supported 00:17:03.717 Controller Attributes 00:17:03.717 128-bit Host Identifier: Supported 00:17:03.717 Non-Operational Permissive Mode: Not Supported 00:17:03.717 NVM Sets: Not Supported 00:17:03.717 Read Recovery Levels: Not Supported 00:17:03.717 Endurance Groups: Not Supported 00:17:03.717 Predictable Latency Mode: Not Supported 00:17:03.717 Traffic Based Keep ALive: Supported 00:17:03.717 Namespace Granularity: Not Supported 00:17:03.717 SQ Associations: Not Supported 00:17:03.717 UUID List: Not Supported 00:17:03.717 Multi-Domain Subsystem: Not Supported 00:17:03.717 Fixed Capacity Management: Not Supported 00:17:03.717 Variable Capacity Management: Not Supported 00:17:03.717 Delete Endurance Group: Not Supported 00:17:03.717 Delete NVM Set: Not Supported 00:17:03.717 Extended LBA Formats Supported: Not Supported 00:17:03.717 Flexible Data Placement Supported: Not Supported 00:17:03.717 00:17:03.717 Controller Memory Buffer Support 00:17:03.717 ================================ 00:17:03.717 Supported: No 00:17:03.717 00:17:03.717 Persistent Memory Region Support 00:17:03.717 ================================ 00:17:03.717 Supported: No 00:17:03.717 00:17:03.717 Admin Command Set Attributes 00:17:03.717 ============================ 00:17:03.717 Security Send/Receive: Not Supported 00:17:03.717 Format NVM: Not Supported 00:17:03.717 Firmware Activate/Download: Not Supported 00:17:03.717 Namespace Management: Not Supported 00:17:03.717 Device Self-Test: Not Supported 00:17:03.717 Directives: Not Supported 00:17:03.717 NVMe-MI: Not Supported 00:17:03.717 Virtualization Management: Not Supported 00:17:03.717 Doorbell Buffer Config: Not Supported 00:17:03.717 Get LBA Status Capability: Not Supported 00:17:03.717 Command & Feature Lockdown Capability: Not Supported 00:17:03.717 Abort Command Limit: 4 00:17:03.717 Async Event Request Limit: 4 00:17:03.717 Number of Firmware Slots: N/A 00:17:03.717 Firmware Slot 1 Read-Only: N/A 00:17:03.717 Firmware Activation Without Reset: N/A 00:17:03.717 Multiple Update Detection Support: N/A 00:17:03.717 Firmware Update Granularity: No Information Provided 00:17:03.717 Per-Namespace SMART Log: Yes 00:17:03.717 Asymmetric Namespace Access Log Page: Supported 00:17:03.717 ANA Transition Time : 10 sec 00:17:03.717 00:17:03.717 Asymmetric Namespace Access Capabilities 00:17:03.717 ANA Optimized State : Supported 00:17:03.717 ANA Non-Optimized State : Supported 00:17:03.717 ANA Inaccessible State : Supported 00:17:03.717 ANA Persistent Loss State : Supported 00:17:03.717 ANA Change State : Supported 00:17:03.717 ANAGRPID is not changed : No 00:17:03.717 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:03.717 00:17:03.717 ANA Group Identifier Maximum : 128 00:17:03.717 Number of ANA Group Identifiers : 128 00:17:03.717 Max Number of Allowed Namespaces : 1024 00:17:03.717 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:03.718 Command Effects Log Page: Supported 00:17:03.718 Get Log Page Extended Data: Supported 00:17:03.718 Telemetry Log Pages: Not Supported 00:17:03.718 Persistent Event Log Pages: Not Supported 00:17:03.718 Supported Log Pages Log Page: May Support 00:17:03.718 Commands Supported & Effects Log Page: Not Supported 00:17:03.718 Feature Identifiers & Effects Log Page:May Support 00:17:03.718 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.718 Data Area 4 for Telemetry Log: Not Supported 00:17:03.718 Error Log Page Entries Supported: 128 00:17:03.718 Keep Alive: Supported 00:17:03.718 Keep Alive Granularity: 1000 ms 00:17:03.718 00:17:03.718 NVM Command Set Attributes 00:17:03.718 ========================== 00:17:03.718 Submission Queue Entry Size 00:17:03.718 Max: 64 00:17:03.718 Min: 64 00:17:03.718 Completion Queue Entry Size 00:17:03.718 Max: 16 00:17:03.718 Min: 16 00:17:03.718 Number of Namespaces: 1024 00:17:03.718 Compare Command: Not Supported 00:17:03.718 Write Uncorrectable Command: Not Supported 00:17:03.718 Dataset Management Command: Supported 00:17:03.718 Write Zeroes Command: Supported 00:17:03.718 Set Features Save Field: Not Supported 00:17:03.718 Reservations: Not Supported 00:17:03.718 Timestamp: Not Supported 00:17:03.718 Copy: Not Supported 00:17:03.718 Volatile Write Cache: Present 00:17:03.718 Atomic Write Unit (Normal): 1 00:17:03.718 Atomic Write Unit (PFail): 1 00:17:03.718 Atomic Compare & Write Unit: 1 00:17:03.718 Fused Compare & Write: Not Supported 00:17:03.718 Scatter-Gather List 00:17:03.718 SGL Command Set: Supported 00:17:03.718 SGL Keyed: Not Supported 00:17:03.718 SGL Bit Bucket Descriptor: Not Supported 00:17:03.718 SGL Metadata Pointer: Not Supported 00:17:03.718 Oversized SGL: Not Supported 00:17:03.718 SGL Metadata Address: Not Supported 00:17:03.718 SGL Offset: Supported 00:17:03.718 Transport SGL Data Block: Not Supported 00:17:03.718 Replay Protected Memory Block: Not Supported 00:17:03.718 00:17:03.718 Firmware Slot Information 00:17:03.718 ========================= 00:17:03.718 Active slot: 0 00:17:03.718 00:17:03.718 Asymmetric Namespace Access 00:17:03.718 =========================== 00:17:03.718 Change Count : 0 00:17:03.718 Number of ANA Group Descriptors : 1 00:17:03.718 ANA Group Descriptor : 0 00:17:03.718 ANA Group ID : 1 00:17:03.718 Number of NSID Values : 1 00:17:03.718 Change Count : 0 00:17:03.718 ANA State : 1 00:17:03.718 Namespace Identifier : 1 00:17:03.718 00:17:03.718 Commands Supported and Effects 00:17:03.718 ============================== 00:17:03.718 Admin Commands 00:17:03.718 -------------- 00:17:03.718 Get Log Page (02h): Supported 00:17:03.718 Identify (06h): Supported 00:17:03.718 Abort (08h): Supported 00:17:03.718 Set Features (09h): Supported 00:17:03.718 Get Features (0Ah): Supported 00:17:03.718 Asynchronous Event Request (0Ch): Supported 00:17:03.718 Keep Alive (18h): Supported 00:17:03.718 I/O Commands 00:17:03.718 ------------ 00:17:03.718 Flush (00h): Supported 00:17:03.718 Write (01h): Supported LBA-Change 00:17:03.718 Read (02h): Supported 00:17:03.718 Write Zeroes (08h): Supported LBA-Change 00:17:03.718 Dataset Management (09h): Supported 00:17:03.718 00:17:03.718 Error Log 00:17:03.718 ========= 00:17:03.718 Entry: 0 00:17:03.718 Error Count: 0x3 00:17:03.718 Submission Queue Id: 0x0 00:17:03.718 Command Id: 0x5 00:17:03.718 Phase Bit: 0 00:17:03.718 Status Code: 0x2 00:17:03.718 Status Code Type: 0x0 00:17:03.718 Do Not Retry: 1 00:17:03.718 Error 
Location: 0x28 00:17:03.718 LBA: 0x0 00:17:03.718 Namespace: 0x0 00:17:03.718 Vendor Log Page: 0x0 00:17:03.718 ----------- 00:17:03.718 Entry: 1 00:17:03.718 Error Count: 0x2 00:17:03.718 Submission Queue Id: 0x0 00:17:03.718 Command Id: 0x5 00:17:03.718 Phase Bit: 0 00:17:03.718 Status Code: 0x2 00:17:03.718 Status Code Type: 0x0 00:17:03.718 Do Not Retry: 1 00:17:03.718 Error Location: 0x28 00:17:03.718 LBA: 0x0 00:17:03.718 Namespace: 0x0 00:17:03.718 Vendor Log Page: 0x0 00:17:03.718 ----------- 00:17:03.718 Entry: 2 00:17:03.718 Error Count: 0x1 00:17:03.718 Submission Queue Id: 0x0 00:17:03.718 Command Id: 0x4 00:17:03.718 Phase Bit: 0 00:17:03.718 Status Code: 0x2 00:17:03.718 Status Code Type: 0x0 00:17:03.718 Do Not Retry: 1 00:17:03.718 Error Location: 0x28 00:17:03.718 LBA: 0x0 00:17:03.718 Namespace: 0x0 00:17:03.718 Vendor Log Page: 0x0 00:17:03.718 00:17:03.718 Number of Queues 00:17:03.718 ================ 00:17:03.718 Number of I/O Submission Queues: 128 00:17:03.718 Number of I/O Completion Queues: 128 00:17:03.718 00:17:03.718 ZNS Specific Controller Data 00:17:03.718 ============================ 00:17:03.718 Zone Append Size Limit: 0 00:17:03.718 00:17:03.718 00:17:03.718 Active Namespaces 00:17:03.718 ================= 00:17:03.718 get_feature(0x05) failed 00:17:03.718 Namespace ID:1 00:17:03.718 Command Set Identifier: NVM (00h) 00:17:03.718 Deallocate: Supported 00:17:03.718 Deallocated/Unwritten Error: Not Supported 00:17:03.718 Deallocated Read Value: Unknown 00:17:03.718 Deallocate in Write Zeroes: Not Supported 00:17:03.718 Deallocated Guard Field: 0xFFFF 00:17:03.718 Flush: Supported 00:17:03.718 Reservation: Not Supported 00:17:03.718 Namespace Sharing Capabilities: Multiple Controllers 00:17:03.718 Size (in LBAs): 1310720 (5GiB) 00:17:03.718 Capacity (in LBAs): 1310720 (5GiB) 00:17:03.718 Utilization (in LBAs): 1310720 (5GiB) 00:17:03.718 UUID: 8066e454-71d4-47e5-aee0-710916c4de72 00:17:03.718 Thin Provisioning: Not Supported 00:17:03.718 Per-NS Atomic Units: Yes 00:17:03.718 Atomic Boundary Size (Normal): 0 00:17:03.718 Atomic Boundary Size (PFail): 0 00:17:03.718 Atomic Boundary Offset: 0 00:17:03.718 NGUID/EUI64 Never Reused: No 00:17:03.718 ANA group ID: 1 00:17:03.718 Namespace Write Protected: No 00:17:03.718 Number of LBA Formats: 1 00:17:03.718 Current LBA Format: LBA Format #00 00:17:03.718 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:03.718 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.718 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.978 rmmod nvme_tcp 00:17:03.978 rmmod nvme_fabrics 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:03.978 14:22:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.978 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:04.237 14:22:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:04.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.804 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:05.063 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:05.063 00:17:05.063 real 0m3.206s 00:17:05.063 user 0m1.127s 00:17:05.063 sys 0m1.456s 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.063 ************************************ 00:17:05.063 END TEST nvmf_identify_kernel_target 00:17:05.063 ************************************ 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.063 ************************************ 00:17:05.063 START TEST nvmf_auth_host 00:17:05.063 ************************************ 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:05.063 * Looking for test storage... 
00:17:05.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:05.063 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.323 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:05.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.324 --rc genhtml_branch_coverage=1 00:17:05.324 --rc genhtml_function_coverage=1 00:17:05.324 --rc genhtml_legend=1 00:17:05.324 --rc geninfo_all_blocks=1 00:17:05.324 --rc geninfo_unexecuted_blocks=1 00:17:05.324 00:17:05.324 ' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:05.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.324 --rc genhtml_branch_coverage=1 00:17:05.324 --rc genhtml_function_coverage=1 00:17:05.324 --rc genhtml_legend=1 00:17:05.324 --rc geninfo_all_blocks=1 00:17:05.324 --rc geninfo_unexecuted_blocks=1 00:17:05.324 00:17:05.324 ' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:05.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.324 --rc genhtml_branch_coverage=1 00:17:05.324 --rc genhtml_function_coverage=1 00:17:05.324 --rc genhtml_legend=1 00:17:05.324 --rc geninfo_all_blocks=1 00:17:05.324 --rc geninfo_unexecuted_blocks=1 00:17:05.324 00:17:05.324 ' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:05.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.324 --rc genhtml_branch_coverage=1 00:17:05.324 --rc genhtml_function_coverage=1 00:17:05.324 --rc genhtml_legend=1 00:17:05.324 --rc geninfo_all_blocks=1 00:17:05.324 --rc geninfo_unexecuted_blocks=1 00:17:05.324 00:17:05.324 ' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.324 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.325 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.325 14:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:05.325 Cannot find device "nvmf_init_br" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:05.325 Cannot find device "nvmf_init_br2" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:05.325 Cannot find device "nvmf_tgt_br" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.325 Cannot find device "nvmf_tgt_br2" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:05.325 Cannot find device "nvmf_init_br" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:05.325 Cannot find device "nvmf_init_br2" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:05.325 Cannot find device "nvmf_tgt_br" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:05.325 Cannot find device "nvmf_tgt_br2" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:05.325 Cannot find device "nvmf_br" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:05.325 Cannot find device "nvmf_init_if" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:05.325 Cannot find device "nvmf_init_if2" 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.325 14:22:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.325 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:05.584 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
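For reference, the ip/iptables calls traced above boil down to the following veth-plus-bridge topology (a minimal sketch condensed from the trace, showing only the first initiator/target pair and omitting nvmf_init_if2/nvmf_tgt_if2; the interface names and 10.0.0.x addresses are the ones visible in the log, not new ones):

# host side: initiator endpoint plus its bridge-facing peer, same for the target pair
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
# the target endpoint lives inside the test namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# one bridge ties the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# let NVMe/TCP traffic (port 4420) in through the initiator interface
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT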
00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:05.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:05.585 00:17:05.585 --- 10.0.0.3 ping statistics --- 00:17:05.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.585 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:05.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:05.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:05.585 00:17:05.585 --- 10.0.0.4 ping statistics --- 00:17:05.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.585 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:05.585 00:17:05.585 --- 10.0.0.1 ping statistics --- 00:17:05.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.585 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:05.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:05.585 00:17:05.585 --- 10.0.0.2 ping statistics --- 00:17:05.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.585 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=79010 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 79010 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 79010 ']' 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
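With connectivity between 10.0.0.1-10.0.0.4 confirmed, nvmfappstart launches the target inside the namespace and waits for its RPC socket. A minimal sketch of that start-and-wait pattern (paths and flags taken from the trace; the polling loop is only an approximation of waitforlisten, whose body the log does not show):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# poll the JSON-RPC socket until the app answers, bailing out if it died early
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done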
00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.585 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:06.153 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06045102b4743ce8ae2b1755c05c5eb0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.prp 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06045102b4743ce8ae2b1755c05c5eb0 0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06045102b4743ce8ae2b1755c05c5eb0 0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06045102b4743ce8ae2b1755c05c5eb0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.prp 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.prp 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.prp 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.154 14:22:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a8fb4b6dded9ac556f759fabc2e1189698c7a7e0afedf6ed211484efaddf436 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oMN 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a8fb4b6dded9ac556f759fabc2e1189698c7a7e0afedf6ed211484efaddf436 3 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a8fb4b6dded9ac556f759fabc2e1189698c7a7e0afedf6ed211484efaddf436 3 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a8fb4b6dded9ac556f759fabc2e1189698c7a7e0afedf6ed211484efaddf436 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oMN 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oMN 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oMN 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f3ac6c6ef9db275891978520a881a9d8409d3b7d6bd976d 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eIP 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f3ac6c6ef9db275891978520a881a9d8409d3b7d6bd976d 0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f3ac6c6ef9db275891978520a881a9d8409d3b7d6bd976d 0 
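The gen_dhchap_key/format_key calls in this stretch of the trace turn bytes from /dev/urandom into DH-HMAC-CHAP secrets of the form DHHC-1:<hash-id>:<base64>:. A minimal sketch of that formatting step, assuming the usual DHHC-1 layout (secret bytes followed by a little-endian CRC-32 inside the base64 field; hash id 00=unspecified, 01=SHA-256, 02=SHA-384, 03=SHA-512); the python helper the trace invokes is not shown verbatim, so the exact byte handling below is illustrative rather than a copy of common.sh:

secret_hex=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes, as in the trace
python3 - "$secret_hex" <<'PY'
import sys, base64, binascii, zlib
raw = binascii.unhexlify(sys.argv[1])              # decode the hex into raw secret bytes (assumption)
crc = zlib.crc32(raw).to_bytes(4, "little")        # CRC-32 appended per the DHHC-1 representation
print("DHHC-1:01:" + base64.b64encode(raw + crc).decode() + ":")
PY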
00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9f3ac6c6ef9db275891978520a881a9d8409d3b7d6bd976d 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eIP 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eIP 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eIP 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2930b50a5707c052b3480e980cca1f727784663c476c0049 00:17:06.154 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:06.413 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pl1 00:17:06.413 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2930b50a5707c052b3480e980cca1f727784663c476c0049 2 00:17:06.413 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2930b50a5707c052b3480e980cca1f727784663c476c0049 2 00:17:06.413 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.413 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.413 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2930b50a5707c052b3480e980cca1f727784663c476c0049 00:17:06.414 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:06.414 14:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pl1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pl1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Pl1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.414 14:22:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6bb5a28c88bd1c7df0c62ad2a2c840f7 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eYu 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6bb5a28c88bd1c7df0c62ad2a2c840f7 1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6bb5a28c88bd1c7df0c62ad2a2c840f7 1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6bb5a28c88bd1c7df0c62ad2a2c840f7 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eYu 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eYu 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.eYu 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b824aebf0dad92d06e08ed5c8c01caa2 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Yyn 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b824aebf0dad92d06e08ed5c8c01caa2 1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b824aebf0dad92d06e08ed5c8c01caa2 1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b824aebf0dad92d06e08ed5c8c01caa2 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Yyn 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Yyn 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Yyn 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0452d97ce318b67c1d45447dd6522a633ec2a149b697d37b 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pWL 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0452d97ce318b67c1d45447dd6522a633ec2a149b697d37b 2 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0452d97ce318b67c1d45447dd6522a633ec2a149b697d37b 2 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0452d97ce318b67c1d45447dd6522a633ec2a149b697d37b 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pWL 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pWL 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.pWL 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:06.414 14:22:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7884010dc08eed83ad02daaf2fb2ade1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VY0 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7884010dc08eed83ad02daaf2fb2ade1 0 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7884010dc08eed83ad02daaf2fb2ade1 0 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7884010dc08eed83ad02daaf2fb2ade1 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:06.414 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.673 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VY0 00:17:06.673 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VY0 00:17:06.673 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VY0 00:17:06.673 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=676c0e7db8b18be8ae59ea99415371794aaf996fcdcbda483df900e784e44afd 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VMt 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 676c0e7db8b18be8ae59ea99415371794aaf996fcdcbda483df900e784e44afd 3 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 676c0e7db8b18be8ae59ea99415371794aaf996fcdcbda483df900e784e44afd 3 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=676c0e7db8b18be8ae59ea99415371794aaf996fcdcbda483df900e784e44afd 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VMt 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VMt 00:17:06.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VMt 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 79010 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 79010 ']' 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.674 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.prp 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oMN ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oMN 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eIP 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Pl1 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Pl1 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eYu 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Yyn ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yyn 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.pWL 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VY0 ]] 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VY0 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VMt 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.934 14:22:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:06.934 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:07.193 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:07.193 14:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:07.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.451 Waiting for block devices as requested 00:17:07.451 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.710 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:08.279 No valid GPT data, bailing 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:08.279 14:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:08.279 No valid GPT data, bailing 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:08.279 No valid GPT data, bailing 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:08.279 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:08.539 No valid GPT data, bailing 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -a 10.0.0.1 -t tcp -s 4420 00:17:08.539 00:17:08.539 Discovery Log Number of Records 2, Generation counter 2 00:17:08.539 =====Discovery Log Entry 0====== 00:17:08.539 trtype: tcp 00:17:08.539 adrfam: ipv4 00:17:08.539 subtype: current discovery subsystem 00:17:08.539 treq: not specified, sq flow control disable supported 00:17:08.539 portid: 1 00:17:08.539 trsvcid: 4420 00:17:08.539 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:08.539 traddr: 10.0.0.1 00:17:08.539 eflags: none 00:17:08.539 sectype: none 00:17:08.539 =====Discovery Log Entry 1====== 00:17:08.539 trtype: tcp 00:17:08.539 adrfam: ipv4 00:17:08.539 subtype: nvme subsystem 00:17:08.539 treq: not specified, sq flow control disable supported 00:17:08.539 portid: 1 00:17:08.539 trsvcid: 4420 00:17:08.539 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:08.539 traddr: 10.0.0.1 00:17:08.539 eflags: none 00:17:08.539 sectype: none 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.539 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.540 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.868 nvme0n1 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.869 nvme0n1 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.869 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.128 
14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.128 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.128 14:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.129 nvme0n1 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:09.129 14:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.129 14:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.388 nvme0n1 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.388 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.389 14:22:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.389 nvme0n1 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.389 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.648 
14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:09.648 nvme0n1 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.648 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.217 14:22:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.217 nvme0n1 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.217 14:22:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.217 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.218 14:22:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.218 14:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.476 nvme0n1 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.476 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.477 nvme0n1 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.477 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 nvme0n1 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.736 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.995 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.996 nvme0n1 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.996 14:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.933 14:22:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.933 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.933 nvme0n1 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.934 14:22:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.934 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.193 nvme0n1 00:17:12.193 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.193 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.193 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.193 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.193 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.193 14:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.193 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.452 nvme0n1 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.452 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:12.711 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.712 nvme0n1 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.712 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.971 14:22:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.971 nvme0n1 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.971 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.972 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.972 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.972 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.230 14:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.134 nvme0n1 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.134 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.393 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.393 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.393 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.393 14:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.393 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.393 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.393 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:15.393 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.393 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.393 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.394 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.653 nvme0n1 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.653 14:22:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.653 14:22:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.653 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.220 nvme0n1 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.221 14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.221 
14:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.480 nvme0n1 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.480 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.047 nvme0n1 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.047 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.048 14:22:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.048 14:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.615 nvme0n1 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.615 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.616 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.616 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.616 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.183 nvme0n1 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.183 14:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.183 
14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.183 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.750 nvme0n1 00:17:18.750 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.750 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.750 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.750 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.750 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.750 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.009 14:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 nvme0n1 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 14:22:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.577 14:22:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.577 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.145 nvme0n1 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.145 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.146 14:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.407 nvme0n1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.407 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.408 nvme0n1 00:17:20.408 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.408 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.408 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.408 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.408 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.408 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:20.672 
14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 nvme0n1 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.672 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.673 
14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.673 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.932 nvme0n1 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.932 nvme0n1 00:17:20.932 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.191 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.192 nvme0n1 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.192 14:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.192 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.450 
14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.450 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.451 14:22:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 nvme0n1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:21.451 14:22:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.451 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.710 nvme0n1 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.710 14:22:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.710 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.969 nvme0n1 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.969 
14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.969 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.970 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
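Each traced iteration above covers one digest/DH-group/key combination: the target side is given the corresponding DHHC-1 key (nvmet_auth_set_key), the host side is limited to the matching digest and DH group (bdev_nvme_set_options), the controller is attached with the per-key secrets, its presence is verified, and it is detached again. A minimal sketch of a single such iteration, using only the RPCs that appear in this output, is shown below; it assumes SPDK's rpc.py client is on PATH and that the DHHC-1 secrets were already registered under the key names key0/ckey0 earlier in the run, as this test does.

    # one connect_authenticate iteration (sketch; parameters taken from the log above)
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
    rpc.py bdev_nvme_detach_controller nvme0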
00:17:22.229 nvme0n1 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:22.229 14:22:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.229 14:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 nvme0n1 00:17:22.488 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.489 14:22:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.489 14:22:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.489 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 nvme0n1 00:17:22.749 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.750 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.015 nvme0n1 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.015 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.274 nvme0n1 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.274 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.275 14:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.275 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.534 nvme0n1 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.534 14:22:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.534 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.535 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.535 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.535 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.794 nvme0n1 00:17:23.794 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.794 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.794 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.794 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.794 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.794 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.053 14:22:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.053 14:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.312 nvme0n1 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.312 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.880 nvme0n1 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:24.880 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.881 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.140 nvme0n1 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.140 14:22:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.140 14:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.708 nvme0n1 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.708 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.277 nvme0n1 00:17:26.277 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.277 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.277 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.277 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.277 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.277 14:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.277 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.845 nvme0n1 00:17:26.845 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.846 14:22:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.846 14:22:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.846 14:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.412 nvme0n1 00:17:27.412 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.412 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.412 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.412 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.412 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.671 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.671 
14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.238 nvme0n1 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.238 14:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.806 nvme0n1 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:28.806 14:22:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.806 14:22:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.806 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.065 nvme0n1 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.065 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:29.066 14:22:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.066 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 nvme0n1 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 nvme0n1 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.325 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.584 nvme0n1 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.584 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.585 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.844 nvme0n1 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.844 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.845 nvme0n1 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.845 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 nvme0n1 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:30.105 
14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.105 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 14:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 nvme0n1 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:30.364 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.365 
14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.365 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.626 nvme0n1 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.626 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.627 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.886 nvme0n1 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.886 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 nvme0n1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.146 
14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.146 14:22:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.146 14:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.405 nvme0n1 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.405 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:31.406 14:22:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.406 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.665 nvme0n1 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.665 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.666 14:22:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.666 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.925 nvme0n1 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.925 
14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.925 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.926 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
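The trace above repeats one pattern per (digest, dhgroup, keyid) combination: the target-side secret is installed via nvmet_auth_set_key, the SPDK host is restricted to that digest and DH group with bdev_nvme_set_options, and a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key before being verified and detached. Below is a minimal standalone sketch of the host-side RPC sequence for one such iteration; it assumes that rpc_cmd in the trace wraps SPDK's scripts/rpc.py and that the named keyring entries key2 and ckey2 were registered earlier in the test run, outside this excerpt.

#!/usr/bin/env bash
# Host-side DH-HMAC-CHAP sketch for one iteration (sha512 / ffdhe3072 / keyid=2).
# Assumes a running SPDK bdev/nvme application and keyring entries named
# "key2" and "ckey2" created earlier in the test (not shown in this excerpt).
rpc=scripts/rpc.py   # assumed path to SPDK's RPC client

# Allow only the digest and DH group under test for DH-HMAC-CHAP negotiation.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Attach to the target, authenticating with key2 and expecting the controller
# to authenticate back with ckey2 (bidirectional authentication).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the controller came up, then tear it down before the next keyid.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0

The interleaved nvme0n1 lines in the trace are the namespace bdev created by each successful attach. After keyid 4 (which has no controller key, hence the empty ckey and the attach without --dhchap-ctrlr-key) completes, the outer loop advances to the next DH group, continuing through ffdhe4096, ffdhe6144, and ffdhe8192 as seen later in the log.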
00:17:32.185 nvme0n1 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.185 14:22:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.185 14:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.753 nvme0n1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.753 14:22:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.753 14:22:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.753 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.012 nvme0n1 00:17:33.012 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.012 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.012 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.012 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.013 14:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.581 nvme0n1 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.581 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.840 nvme0n1 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.840 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.100 14:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.359 nvme0n1 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
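The ffdhe6144 pass above also shows the two shapes an authenticated attach can take. Keyids 0 through 3 each carry a host secret plus a paired controller secret, so those attaches are bidirectional DH-HMAC-CHAP: the host authenticates with --dhchap-key and also challenges the controller via --dhchap-ctrlr-key. Keyid 4 has no controller secret (ckey is empty in the trace), so that attach only authenticates the host. Both invocations, as they appear in this run (names such as key3, ckey3 and key4 refer to keys registered earlier in the run, outside this excerpt):

  # bidirectional: host key plus controller key (keyid 3)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # host-only: keyid 4 ships no ckey, so --dhchap-ctrlr-key is omitted
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4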
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYwNDUxMDJiNDc0M2NlOGFlMmIxNzU1YzA1YzVlYjAzG4F3: 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: ]] 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE4ZmI0YjZkZGVkOWFjNTU2Zjc1OWZhYmMyZTExODk2OThjN2E3ZTBhZmVkZjZlZDIxMTQ4NGVmYWRkZjQzNhrATNs=: 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.359 14:22:59 
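With the ffdhe6144 sweep finished, the dhgroup loop (host/auth.sh@101-104) repeats the same keyid sweep with ffdhe8192. Every iteration follows the recipe the trace keeps repeating: program the key on the target, restrict the host to the digest and DH group under test, attach a controller with the matching key material, confirm it came up, and detach. A condensed sketch of the keyid 0 iteration underway here, built only from commands visible in this trace:

  # one connect_authenticate iteration: sha512 digest, ffdhe8192 DH group, keyid 0
  nvmet_auth_set_key sha512 ffdhe8192 0          # target-side key for host0
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0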
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.359 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.360 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.360 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.360 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.360 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.360 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.360 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.928 nvme0n1 00:17:34.928 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.928 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.928 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.928 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.928 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.928 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.187 14:22:59 
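The get_main_ns_ip fragment traced before every attach simply resolves which address the host should dial for the transport under test: the candidate variable is NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, and dereferencing the chosen name yields 10.0.0.1 in this run. A rough reconstruction of the helper from the trace (the exact body and the $TEST_TRANSPORT variable name are assumptions; xtrace only shows the expanded values):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"                          # 10.0.0.1 here
  }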
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.187 14:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.757 nvme0n1 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.757 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.758 14:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.326 nvme0n1 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.326 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQ1MmQ5N2NlMzE4YjY3YzFkNDU0NDdkZDY1MjJhNjMzZWMyYTE0OWI2OTdkMzdiBGuKmg==: 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: ]] 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg4NDAxMGRjMDhlZWQ4M2FkMDJkYWFmMmZiMmFkZTF+/Oiz: 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.585 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
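On the target side, nvmet_auth_set_key echoes the chosen hash ('hmac(sha512)'), the DH group, and the DHHC-1 secrets, but xtrace does not print redirections, so the destinations are not visible here. With a Linux kernel nvmet target these writes would normally land in the per-host configfs attributes; the paths below are an assumption for illustration, not something shown in the trace:

  # assumed wiring behind nvmet_auth_set_key (host/auth.sh@42-51)
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"
  echo ffdhe8192 > "$host_cfg/dhchap_dhgroup"
  echo "$key" > "$host_cfg/dhchap_key"                          # DHHC-1 host secret
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # controller secret, if set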
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.586 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.163 nvme0n1 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.163 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njc2YzBlN2RiOGIxOGJlOGFlNTllYTk5NDE1MzcxNzk0YWFmOTk2ZmNkY2JkYTQ4M2RmOTAwZTc4NGU0NGFmZCVudY4=: 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.164 14:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.164 14:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.739 nvme0n1 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
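That closes the digest/DH-group/keyid matrix; from here on the trace is negative testing. The target is re-keyed to sha256/ffdhe2048 with keyid 1 and the host is restricted to match, and the attach attempts that follow are wrapped in the harness's NOT helper (common/autotest_common.sh), which only succeeds when the wrapped command fails. As the request/response dumps below show, the expected symptom is a JSON-RPC error -5 (Input/output error), since the DH-HMAC-CHAP exchange cannot complete. The first case, reproduced from the trace:

  # host/auth.sh@112: attach with no --dhchap-key at all, expected to fail
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  # bdev_nvme_attach_controller responds with {"code": -5, "message": "Input/output error"}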
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.739 request: 00:17:37.739 { 00:17:37.739 "name": "nvme0", 00:17:37.739 "trtype": "tcp", 00:17:37.739 "traddr": "10.0.0.1", 00:17:37.739 "adrfam": "ipv4", 00:17:37.739 "trsvcid": "4420", 00:17:37.739 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:37.739 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:37.739 "prchk_reftag": false, 00:17:37.739 "prchk_guard": false, 00:17:37.739 "hdgst": false, 00:17:37.739 "ddgst": false, 00:17:37.739 "allow_unrecognized_csi": false, 00:17:37.739 "method": "bdev_nvme_attach_controller", 00:17:37.739 "req_id": 1 00:17:37.739 } 00:17:37.739 Got JSON-RPC error response 00:17:37.739 response: 00:17:37.739 { 00:17:37.739 "code": -5, 00:17:37.739 "message": "Input/output error" 00:17:37.739 } 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.739 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.000 request: 00:17:38.000 { 00:17:38.000 "name": "nvme0", 00:17:38.000 "trtype": "tcp", 00:17:38.000 "traddr": "10.0.0.1", 00:17:38.000 "adrfam": "ipv4", 00:17:38.000 "trsvcid": "4420", 00:17:38.000 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:38.000 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:38.000 "prchk_reftag": false, 00:17:38.000 "prchk_guard": false, 00:17:38.000 "hdgst": false, 00:17:38.000 "ddgst": false, 00:17:38.000 "dhchap_key": "key2", 00:17:38.000 "allow_unrecognized_csi": false, 00:17:38.000 "method": "bdev_nvme_attach_controller", 00:17:38.000 "req_id": 1 00:17:38.000 } 00:17:38.000 Got JSON-RPC error response 00:17:38.000 response: 00:17:38.000 { 00:17:38.000 "code": -5, 00:17:38.000 "message": "Input/output error" 00:17:38.000 } 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.000 14:23:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.000 request: 00:17:38.000 { 00:17:38.000 "name": "nvme0", 00:17:38.000 "trtype": "tcp", 00:17:38.000 "traddr": "10.0.0.1", 00:17:38.000 "adrfam": "ipv4", 00:17:38.000 "trsvcid": "4420", 
00:17:38.000 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:38.000 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:38.000 "prchk_reftag": false, 00:17:38.000 "prchk_guard": false, 00:17:38.000 "hdgst": false, 00:17:38.000 "ddgst": false, 00:17:38.000 "dhchap_key": "key1", 00:17:38.000 "dhchap_ctrlr_key": "ckey2", 00:17:38.000 "allow_unrecognized_csi": false, 00:17:38.000 "method": "bdev_nvme_attach_controller", 00:17:38.000 "req_id": 1 00:17:38.000 } 00:17:38.000 Got JSON-RPC error response 00:17:38.000 response: 00:17:38.000 { 00:17:38.000 "code": -5, 00:17:38.000 "message": "Input/output error" 00:17:38.000 } 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.000 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 nvme0n1 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 request: 00:17:38.260 { 00:17:38.260 "name": "nvme0", 00:17:38.260 "dhchap_key": "key1", 00:17:38.260 "dhchap_ctrlr_key": "ckey2", 00:17:38.260 "method": "bdev_nvme_set_keys", 00:17:38.260 "req_id": 1 00:17:38.260 } 00:17:38.260 Got JSON-RPC error response 00:17:38.260 response: 00:17:38.260 
{ 00:17:38.260 "code": -13, 00:17:38.260 "message": "Permission denied" 00:17:38.260 } 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.260 14:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.260 14:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.260 14:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:38.260 14:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYzYWM2YzZlZjlkYjI3NTg5MTk3ODUyMGE4ODFhOWQ4NDA5ZDNiN2Q2YmQ5NzZkchh/JA==: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkzMGI1MGE1NzA3YzA1MmIzNDgwZTk4MGNjYTFmNzI3Nzg0NjYzYzQ3NmMwMDQ58v+Ayg==: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 nvme0n1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmJiNWEyOGM4OGJkMWM3ZGYwYzYyYWQyYTJjODQwZjeKOtKI: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyNGFlYmYwZGFkOTJkMDZlMDhlZDVjOGMwMWNhYTLZkSVh: 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 request: 00:17:39.639 { 00:17:39.639 "name": "nvme0", 00:17:39.639 "dhchap_key": "key2", 00:17:39.639 "dhchap_ctrlr_key": "ckey1", 00:17:39.639 "method": "bdev_nvme_set_keys", 00:17:39.639 "req_id": 1 00:17:39.639 } 00:17:39.639 Got JSON-RPC error response 00:17:39.639 response: 00:17:39.639 { 00:17:39.639 "code": -13, 00:17:39.639 "message": "Permission denied" 00:17:39.639 } 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:39.639 14:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.576 rmmod nvme_tcp 00:17:40.576 rmmod nvme_fabrics 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 79010 ']' 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 79010 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 79010 ']' 00:17:40.576 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 79010 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79010 00:17:40.835 killing process with pid 79010 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79010' 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 79010 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 79010 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:40.835 14:23:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:40.835 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:41.094 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:41.095 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:41.095 14:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:42.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:42.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:17:42.030 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:42.030 14:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.prp /tmp/spdk.key-null.eIP /tmp/spdk.key-sha256.eYu /tmp/spdk.key-sha384.pWL /tmp/spdk.key-sha512.VMt /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:42.030 14:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:42.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:42.597 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:42.597 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:42.597 00:17:42.598 real 0m37.417s 00:17:42.598 user 0m34.039s 00:17:42.598 sys 0m3.912s 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.598 ************************************ 00:17:42.598 END TEST nvmf_auth_host 00:17:42.598 ************************************ 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.598 ************************************ 00:17:42.598 START TEST nvmf_digest 00:17:42.598 ************************************ 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:42.598 * Looking for test storage... 
00:17:42.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:17:42.598 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.857 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:42.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.858 --rc genhtml_branch_coverage=1 00:17:42.858 --rc genhtml_function_coverage=1 00:17:42.858 --rc genhtml_legend=1 00:17:42.858 --rc geninfo_all_blocks=1 00:17:42.858 --rc geninfo_unexecuted_blocks=1 00:17:42.858 00:17:42.858 ' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:42.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.858 --rc genhtml_branch_coverage=1 00:17:42.858 --rc genhtml_function_coverage=1 00:17:42.858 --rc genhtml_legend=1 00:17:42.858 --rc geninfo_all_blocks=1 00:17:42.858 --rc geninfo_unexecuted_blocks=1 00:17:42.858 00:17:42.858 ' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:42.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.858 --rc genhtml_branch_coverage=1 00:17:42.858 --rc genhtml_function_coverage=1 00:17:42.858 --rc genhtml_legend=1 00:17:42.858 --rc geninfo_all_blocks=1 00:17:42.858 --rc geninfo_unexecuted_blocks=1 00:17:42.858 00:17:42.858 ' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:42.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.858 --rc genhtml_branch_coverage=1 00:17:42.858 --rc genhtml_function_coverage=1 00:17:42.858 --rc genhtml_legend=1 00:17:42.858 --rc geninfo_all_blocks=1 00:17:42.858 --rc geninfo_unexecuted_blocks=1 00:17:42.858 00:17:42.858 ' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.858 14:23:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.858 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.858 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:42.859 Cannot find device "nvmf_init_br" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:42.859 Cannot find device "nvmf_init_br2" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:42.859 Cannot find device "nvmf_tgt_br" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:42.859 Cannot find device "nvmf_tgt_br2" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:42.859 Cannot find device "nvmf_init_br" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:42.859 Cannot find device "nvmf_init_br2" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:42.859 Cannot find device "nvmf_tgt_br" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:42.859 Cannot find device "nvmf_tgt_br2" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:42.859 Cannot find device "nvmf_br" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:42.859 Cannot find device "nvmf_init_if" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:42.859 Cannot find device "nvmf_init_if2" 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.859 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.118 14:23:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:43.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:43.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:43.118 00:17:43.118 --- 10.0.0.3 ping statistics --- 00:17:43.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.118 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:43.118 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:43.118 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:43.118 00:17:43.118 --- 10.0.0.4 ping statistics --- 00:17:43.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.118 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:43.118 00:17:43.118 --- 10.0.0.1 ping statistics --- 00:17:43.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.118 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:43.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:43.118 00:17:43.118 --- 10.0.0.2 ping statistics --- 00:17:43.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.118 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.118 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:43.119 ************************************ 00:17:43.119 START TEST nvmf_digest_clean 00:17:43.119 ************************************ 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80665 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80665 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80665 ']' 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.119 14:23:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.378 [2024-12-10 14:23:07.968490] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:17:43.378 [2024-12-10 14:23:07.968609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.378 [2024-12-10 14:23:08.121697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.378 [2024-12-10 14:23:08.162254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.378 [2024-12-10 14:23:08.162531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.378 [2024-12-10 14:23:08.162633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.378 [2024-12-10 14:23:08.162729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.378 [2024-12-10 14:23:08.162811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:43.378 [2024-12-10 14:23:08.163317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.638 [2024-12-10 14:23:08.316473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:43.638 null0 00:17:43.638 [2024-12-10 14:23:08.357505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.638 [2024-12-10 14:23:08.381545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80690 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80690 /var/tmp/bperf.sock 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80690 ']' 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.638 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.638 [2024-12-10 14:23:08.450410] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:17:43.638 [2024-12-10 14:23:08.450976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80690 ] 00:17:43.898 [2024-12-10 14:23:08.613032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.898 [2024-12-10 14:23:08.660208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.898 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.898 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:43.898 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:43.898 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:43.898 14:23:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:44.466 [2024-12-10 14:23:09.012491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.466 14:23:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.466 14:23:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.725 nvme0n1 00:17:44.725 14:23:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:44.725 14:23:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.725 Running I/O for 2 seconds... 
00:17:47.034 15367.00 IOPS, 60.03 MiB/s [2024-12-10T14:23:11.871Z] 16256.00 IOPS, 63.50 MiB/s 00:17:47.034 Latency(us) 00:17:47.034 [2024-12-10T14:23:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.034 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:47.034 nvme0n1 : 2.01 16289.97 63.63 0.00 0.00 7852.46 6732.33 24665.37 00:17:47.034 [2024-12-10T14:23:11.871Z] =================================================================================================================== 00:17:47.034 [2024-12-10T14:23:11.871Z] Total : 16289.97 63.63 0.00 0.00 7852.46 6732.33 24665.37 00:17:47.034 { 00:17:47.034 "results": [ 00:17:47.034 { 00:17:47.034 "job": "nvme0n1", 00:17:47.034 "core_mask": "0x2", 00:17:47.034 "workload": "randread", 00:17:47.034 "status": "finished", 00:17:47.034 "queue_depth": 128, 00:17:47.034 "io_size": 4096, 00:17:47.034 "runtime": 2.011483, 00:17:47.034 "iops": 16289.971130752783, 00:17:47.034 "mibps": 63.63269972950306, 00:17:47.034 "io_failed": 0, 00:17:47.034 "io_timeout": 0, 00:17:47.034 "avg_latency_us": 7852.456456135192, 00:17:47.034 "min_latency_us": 6732.334545454545, 00:17:47.034 "max_latency_us": 24665.36727272727 00:17:47.034 } 00:17:47.034 ], 00:17:47.034 "core_count": 1 00:17:47.034 } 00:17:47.034 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:47.035 | select(.opcode=="crc32c") 00:17:47.035 | "\(.module_name) \(.executed)"' 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80690 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80690 ']' 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80690 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.035 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80690 00:17:47.294 killing process with pid 80690 00:17:47.294 Received shutdown signal, test time was about 2.000000 seconds 00:17:47.294 00:17:47.294 Latency(us) 00:17:47.294 [2024-12-10T14:23:12.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:47.294 [2024-12-10T14:23:12.131Z] =================================================================================================================== 00:17:47.294 [2024-12-10T14:23:12.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.294 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:47.294 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:47.294 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80690' 00:17:47.294 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80690 00:17:47.294 14:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80690 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80737 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80737 /var/tmp/bperf.sock 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80737 ']' 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.294 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.294 [2024-12-10 14:23:12.071844] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:17:47.294 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:47.294 Zero copy mechanism will not be used. 
00:17:47.294 [2024-12-10 14:23:12.072544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80737 ] 00:17:47.553 [2024-12-10 14:23:12.218552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.553 [2024-12-10 14:23:12.246747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.553 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.553 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:47.553 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:47.553 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:47.553 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:47.811 [2024-12-10 14:23:12.549758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.811 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.812 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.378 nvme0n1 00:17:48.378 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:48.378 14:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.378 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.378 Zero copy mechanism will not be used. 00:17:48.378 Running I/O for 2 seconds... 
00:17:50.257 8560.00 IOPS, 1070.00 MiB/s [2024-12-10T14:23:15.094Z] 8560.00 IOPS, 1070.00 MiB/s 00:17:50.257 Latency(us) 00:17:50.257 [2024-12-10T14:23:15.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.257 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:50.257 nvme0n1 : 2.00 8557.11 1069.64 0.00 0.00 1866.64 1601.16 10426.18 00:17:50.257 [2024-12-10T14:23:15.094Z] =================================================================================================================== 00:17:50.257 [2024-12-10T14:23:15.094Z] Total : 8557.11 1069.64 0.00 0.00 1866.64 1601.16 10426.18 00:17:50.257 { 00:17:50.257 "results": [ 00:17:50.257 { 00:17:50.257 "job": "nvme0n1", 00:17:50.257 "core_mask": "0x2", 00:17:50.257 "workload": "randread", 00:17:50.257 "status": "finished", 00:17:50.257 "queue_depth": 16, 00:17:50.257 "io_size": 131072, 00:17:50.257 "runtime": 2.002546, 00:17:50.257 "iops": 8557.106803039731, 00:17:50.257 "mibps": 1069.6383503799664, 00:17:50.257 "io_failed": 0, 00:17:50.257 "io_timeout": 0, 00:17:50.257 "avg_latency_us": 1866.6398302351242, 00:17:50.257 "min_latency_us": 1601.1636363636364, 00:17:50.257 "max_latency_us": 10426.181818181818 00:17:50.257 } 00:17:50.257 ], 00:17:50.257 "core_count": 1 00:17:50.257 } 00:17:50.257 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:50.257 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:50.257 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:50.257 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:50.257 | select(.opcode=="crc32c") 00:17:50.257 | "\(.module_name) \(.executed)"' 00:17:50.257 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80737 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80737 ']' 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80737 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.515 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80737 00:17:50.773 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:50.773 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:17:50.773 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80737' 00:17:50.773 killing process with pid 80737 00:17:50.773 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80737 00:17:50.773 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.773 00:17:50.773 Latency(us) 00:17:50.773 [2024-12-10T14:23:15.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.773 [2024-12-10T14:23:15.610Z] =================================================================================================================== 00:17:50.773 [2024-12-10T14:23:15.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.773 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80737 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80790 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80790 /var/tmp/bperf.sock 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80790 ']' 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:50.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.774 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.774 [2024-12-10 14:23:15.538569] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
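After each 2-second run, the harness verifies that the crc32c digest work was executed by the expected accel module (software here, since scan_dsa=false) before tearing the bperf process down. Condensed from the trace, and reusing only variable names that appear in it, the check is roughly the sketch below.

    # query accel statistics over the same bperf RPC socket and pull out the crc32c entry
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # pass only if the expected module did the work and at least one operation actually ran
    [[ $acc_module == software ]] && (( acc_executed > 0 ))
    # tear down bdevperf; bperfpid was recorded at launch (80737, 80790, 80838 in this log)
    kill "$bperfpid"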
00:17:50.774 [2024-12-10 14:23:15.539059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80790 ] 00:17:51.032 [2024-12-10 14:23:15.683517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.032 [2024-12-10 14:23:15.711984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.032 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.032 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:51.032 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:51.032 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:51.032 14:23:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:51.291 [2024-12-10 14:23:16.026142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.291 14:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.291 14:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.549 nvme0n1 00:17:51.549 14:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:51.549 14:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:51.807 Running I/O for 2 seconds... 
00:17:53.679 18670.00 IOPS, 72.93 MiB/s [2024-12-10T14:23:18.516Z] 18796.50 IOPS, 73.42 MiB/s 00:17:53.679 Latency(us) 00:17:53.679 [2024-12-10T14:23:18.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.679 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.679 nvme0n1 : 2.01 18823.63 73.53 0.00 0.00 6794.39 2234.18 15192.44 00:17:53.679 [2024-12-10T14:23:18.516Z] =================================================================================================================== 00:17:53.679 [2024-12-10T14:23:18.516Z] Total : 18823.63 73.53 0.00 0.00 6794.39 2234.18 15192.44 00:17:53.679 { 00:17:53.679 "results": [ 00:17:53.679 { 00:17:53.679 "job": "nvme0n1", 00:17:53.679 "core_mask": "0x2", 00:17:53.679 "workload": "randwrite", 00:17:53.679 "status": "finished", 00:17:53.679 "queue_depth": 128, 00:17:53.679 "io_size": 4096, 00:17:53.679 "runtime": 2.010664, 00:17:53.679 "iops": 18823.6323920854, 00:17:53.679 "mibps": 73.5298140315836, 00:17:53.679 "io_failed": 0, 00:17:53.679 "io_timeout": 0, 00:17:53.679 "avg_latency_us": 6794.387477565765, 00:17:53.679 "min_latency_us": 2234.181818181818, 00:17:53.679 "max_latency_us": 15192.436363636363 00:17:53.679 } 00:17:53.679 ], 00:17:53.679 "core_count": 1 00:17:53.679 } 00:17:53.679 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:53.679 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:53.679 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:53.679 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:53.679 | select(.opcode=="crc32c") 00:17:53.679 | "\(.module_name) \(.executed)"' 00:17:53.679 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80790 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80790 ']' 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80790 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.938 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80790 00:17:54.197 killing process with pid 80790 00:17:54.197 Received shutdown signal, test time was about 2.000000 seconds 00:17:54.197 00:17:54.197 Latency(us) 00:17:54.197 [2024-12-10T14:23:19.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:54.197 [2024-12-10T14:23:19.034Z] =================================================================================================================== 00:17:54.197 [2024-12-10T14:23:19.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80790' 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80790 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80790 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:54.197 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80838 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80838 /var/tmp/bperf.sock 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80838 ']' 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.198 14:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:54.198 [2024-12-10 14:23:18.958005] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:17:54.198 [2024-12-10 14:23:18.958316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80838 ] 00:17:54.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:54.198 Zero copy mechanism will not be used. 00:17:54.457 [2024-12-10 14:23:19.095011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.457 [2024-12-10 14:23:19.123019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.457 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.457 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:54.457 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:54.457 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:54.457 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:54.716 [2024-12-10 14:23:19.450306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.716 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.716 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.975 nvme0n1 00:17:54.975 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:54.975 14:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.234 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.234 Zero copy mechanism will not be used. 00:17:55.234 Running I/O for 2 seconds... 
00:17:57.106 6827.00 IOPS, 853.38 MiB/s [2024-12-10T14:23:21.943Z] 6843.00 IOPS, 855.38 MiB/s 00:17:57.106 Latency(us) 00:17:57.106 [2024-12-10T14:23:21.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.106 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:57.106 nvme0n1 : 2.00 6841.37 855.17 0.00 0.00 2333.51 1966.08 5540.77 00:17:57.106 [2024-12-10T14:23:21.943Z] =================================================================================================================== 00:17:57.106 [2024-12-10T14:23:21.943Z] Total : 6841.37 855.17 0.00 0.00 2333.51 1966.08 5540.77 00:17:57.106 { 00:17:57.106 "results": [ 00:17:57.106 { 00:17:57.106 "job": "nvme0n1", 00:17:57.106 "core_mask": "0x2", 00:17:57.106 "workload": "randwrite", 00:17:57.106 "status": "finished", 00:17:57.106 "queue_depth": 16, 00:17:57.106 "io_size": 131072, 00:17:57.106 "runtime": 2.002815, 00:17:57.106 "iops": 6841.370770640324, 00:17:57.106 "mibps": 855.1713463300405, 00:17:57.106 "io_failed": 0, 00:17:57.106 "io_timeout": 0, 00:17:57.106 "avg_latency_us": 2333.511217738618, 00:17:57.106 "min_latency_us": 1966.08, 00:17:57.106 "max_latency_us": 5540.770909090909 00:17:57.106 } 00:17:57.106 ], 00:17:57.106 "core_count": 1 00:17:57.106 } 00:17:57.106 14:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:57.106 14:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:57.106 14:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:57.106 14:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:57.106 | select(.opcode=="crc32c") 00:17:57.106 | "\(.module_name) \(.executed)"' 00:17:57.106 14:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80838 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80838 ']' 00:17:57.365 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80838 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80838 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.624 
killing process with pid 80838 00:17:57.624 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.624 00:17:57.624 Latency(us) 00:17:57.624 [2024-12-10T14:23:22.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.624 [2024-12-10T14:23:22.461Z] =================================================================================================================== 00:17:57.624 [2024-12-10T14:23:22.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80838' 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80838 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80838 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80665 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80665 ']' 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80665 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80665 00:17:57.624 killing process with pid 80665 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80665' 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80665 00:17:57.624 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80665 00:17:57.884 ************************************ 00:17:57.884 END TEST nvmf_digest_clean 00:17:57.884 ************************************ 00:17:57.884 00:17:57.884 real 0m14.638s 00:17:57.884 user 0m28.375s 00:17:57.884 sys 0m4.366s 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:57.884 ************************************ 00:17:57.884 START TEST nvmf_digest_error 00:17:57.884 ************************************ 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:57.884 14:23:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80914 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80914 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80914 ']' 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.884 14:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.884 [2024-12-10 14:23:22.658688] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:17:57.884 [2024-12-10 14:23:22.658803] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.143 [2024-12-10 14:23:22.802448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.143 [2024-12-10 14:23:22.830999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.143 [2024-12-10 14:23:22.831309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.143 [2024-12-10 14:23:22.831346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.143 [2024-12-10 14:23:22.831354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.143 [2024-12-10 14:23:22.831361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.143 [2024-12-10 14:23:22.831689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.077 [2024-12-10 14:23:23.676141] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.077 [2024-12-10 14:23:23.712506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.077 null0 00:17:59.077 [2024-12-10 14:23:23.747664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.077 [2024-12-10 14:23:23.771765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80946 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80946 /var/tmp/bperf.sock 00:17:59.077 14:23:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80946 ']' 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:59.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.077 14:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.077 [2024-12-10 14:23:23.836699] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:17:59.077 [2024-12-10 14:23:23.836794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80946 ] 00:17:59.336 [2024-12-10 14:23:23.990113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.336 [2024-12-10 14:23:24.028568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.336 [2024-12-10 14:23:24.061575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.336 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.336 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:59.336 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:59.336 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:59.596 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:59.596 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.596 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.596 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.596 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.596 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.855 nvme0n1 00:18:00.114 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:00.114 14:23:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.114 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.114 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.114 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:00.114 14:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:00.114 Running I/O for 2 seconds... 00:18:00.114 [2024-12-10 14:23:24.870976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.114 [2024-12-10 14:23:24.871028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.114 [2024-12-10 14:23:24.871042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.114 [2024-12-10 14:23:24.886818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.114 [2024-12-10 14:23:24.886868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.114 [2024-12-10 14:23:24.886880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.114 [2024-12-10 14:23:24.902735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.114 [2024-12-10 14:23:24.902782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.114 [2024-12-10 14:23:24.902794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.114 [2024-12-10 14:23:24.918135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.114 [2024-12-10 14:23:24.918181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.114 [2024-12-10 14:23:24.918192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.114 [2024-12-10 14:23:24.933255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.114 [2024-12-10 14:23:24.933301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.114 [2024-12-10 14:23:24.933313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.114 [2024-12-10 14:23:24.948859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.114 [2024-12-10 14:23:24.948905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:227 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.114 [2024-12-10 14:23:24.948917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:24.965009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:24.965055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:24.965067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:24.980232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:24.980278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:24.980289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:24.995273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:24.995322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:24.995333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.010635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.010692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.025736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.025782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.025793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.040972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.041016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.041028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.056093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.056138] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.056149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.071163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.071210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.071221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.086099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.086145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.086156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.101160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.101205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.101216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.116465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.116511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.116521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.131548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.131593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.131604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.373 [2024-12-10 14:23:25.146546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.373 [2024-12-10 14:23:25.146609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.373 [2024-12-10 14:23:25.146621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.374 [2024-12-10 14:23:25.161689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.374 [2024-12-10 14:23:25.161735] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.374 [2024-12-10 14:23:25.161745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.374 [2024-12-10 14:23:25.176870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.374 [2024-12-10 14:23:25.176916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.374 [2024-12-10 14:23:25.176927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.374 [2024-12-10 14:23:25.191864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.374 [2024-12-10 14:23:25.191910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.374 [2024-12-10 14:23:25.191920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.374 [2024-12-10 14:23:25.207197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.374 [2024-12-10 14:23:25.207231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.374 [2024-12-10 14:23:25.207244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.223081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.223148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.223177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.238733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.238779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.238791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.255855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.255899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.255909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.272240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.272267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.272279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.287732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.287760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.287771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.303290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.303320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.303331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.319290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.319321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.319333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.336211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.336242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.336254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.632 [2024-12-10 14:23:25.352004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.632 [2024-12-10 14:23:25.352043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-12-10 14:23:25.352055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.368299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.368344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.368370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.386181] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.386212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.386224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.403108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.403168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.403180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.418872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.418917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.418928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.433993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.434039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.434050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.449071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.449116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.449127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.633 [2024-12-10 14:23:25.464380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.633 [2024-12-10 14:23:25.464427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.633 [2024-12-10 14:23:25.464439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.480595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.480640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.480652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:00.892 [2024-12-10 14:23:25.495769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.495815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.495826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.511010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.511082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.511093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.526385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.526441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.543593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.543643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.543655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.560980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.561034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.561046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.577513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.577559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.577570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.593282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.593326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.593338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.609104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.609149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.609160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.624865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.624911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.624922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.640530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.640574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.640585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.656277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.656322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.656333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.671996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.672048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.672059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.687782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.892 [2024-12-10 14:23:25.687827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.892 [2024-12-10 14:23:25.687838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.892 [2024-12-10 14:23:25.703794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.893 [2024-12-10 14:23:25.703839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-12-10 14:23:25.703850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.893 [2024-12-10 14:23:25.719666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:00.893 [2024-12-10 14:23:25.719710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-12-10 14:23:25.719721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.735886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.735931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.735942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.750917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.750979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.751000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.766245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.766291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.766302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.781204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.781248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.781259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.796228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.796272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.796283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.811292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.811339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3353 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.811351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.826272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.826317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.826328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.842999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.843065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 16066.00 IOPS, 62.76 MiB/s [2024-12-10T14:23:25.989Z] [2024-12-10 14:23:25.869434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.869479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.869491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.886012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.886063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.886074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.901747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.901792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.901803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.916877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.916922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.916933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.931864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.931910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.931921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.946757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.946801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.946812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.962804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.962849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.962860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.152 [2024-12-10 14:23:25.978093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.152 [2024-12-10 14:23:25.978137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.152 [2024-12-10 14:23:25.978149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.412 [2024-12-10 14:23:25.994165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.412 [2024-12-10 14:23:25.994211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.412 [2024-12-10 14:23:25.994222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.412 [2024-12-10 14:23:26.009376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.412 [2024-12-10 14:23:26.009422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.412 [2024-12-10 14:23:26.009433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.412 [2024-12-10 14:23:26.024895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.412 [2024-12-10 14:23:26.024954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.412 [2024-12-10 14:23:26.024965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.040162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.040192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.040203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.055295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.055340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.055352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.070288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.070334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.070344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.085339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.085383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.085394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.100406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.100451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.100461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.115585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.115630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.115641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.130426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.130471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.130481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.145360] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.145405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.145416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.160480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.160524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.160535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.175588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.175633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.175644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.190451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.190496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.190507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.205433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.205478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.205489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.220575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.220621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.220633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.413 [2024-12-10 14:23:26.235726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.413 [2024-12-10 14:23:26.235771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.413 [2024-12-10 14:23:26.235782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:01.673 [2024-12-10 14:23:26.251792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.251839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.251851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.267204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.267253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.267265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.282204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.282249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.282260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.297152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.297196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.297207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.312255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.312300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.312310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.327196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.327242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.327254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.342100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.342144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.342155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.357073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.357118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.357129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.372094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.372138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.372148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.387178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.387229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.387241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.404187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.404236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.404249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.421952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.422024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.422037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.439669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.439697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.439708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.455966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.456004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.456017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.471599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.471643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.471654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.487210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.487242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.487254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.673 [2024-12-10 14:23:26.502721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.673 [2024-12-10 14:23:26.502764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.673 [2024-12-10 14:23:26.502776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.519689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.519717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.519728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.535429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.535486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.535511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.550957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.551009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.551021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.566622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.566650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.932 [2024-12-10 14:23:26.566660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.582100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.582127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.582137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.597749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.597796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.597807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.613179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.613224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.613235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.628170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.628214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.628225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.643059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.643111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.643155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.658091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.658135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.658146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.673351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.673397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:17501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.673408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.689117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.689162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.689174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.704589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.932 [2024-12-10 14:23:26.704637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.932 [2024-12-10 14:23:26.704665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.932 [2024-12-10 14:23:26.721251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.933 [2024-12-10 14:23:26.721297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.933 [2024-12-10 14:23:26.721308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.933 [2024-12-10 14:23:26.738486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.933 [2024-12-10 14:23:26.738532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.933 [2024-12-10 14:23:26.738559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.933 [2024-12-10 14:23:26.755847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:01.933 [2024-12-10 14:23:26.755893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.933 [2024-12-10 14:23:26.755920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.191 [2024-12-10 14:23:26.772786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:02.191 [2024-12-10 14:23:26.772832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.191 [2024-12-10 14:23:26.772843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.191 [2024-12-10 14:23:26.788740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:02.191 [2024-12-10 14:23:26.788785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.191 [2024-12-10 14:23:26.788796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.191 [2024-12-10 14:23:26.804801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:02.191 [2024-12-10 14:23:26.804847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.192 [2024-12-10 14:23:26.804859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.192 [2024-12-10 14:23:26.820624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:02.192 [2024-12-10 14:23:26.820669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.192 [2024-12-10 14:23:26.820680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.192 [2024-12-10 14:23:26.836699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:02.192 [2024-12-10 14:23:26.836746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.192 [2024-12-10 14:23:26.836757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.192 16129.50 IOPS, 63.01 MiB/s [2024-12-10T14:23:27.029Z] [2024-12-10 14:23:26.853395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b56b50) 00:18:02.192 [2024-12-10 14:23:26.853439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.192 [2024-12-10 14:23:26.853450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.192 00:18:02.192 Latency(us) 00:18:02.192 [2024-12-10T14:23:27.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:02.192 nvme0n1 : 2.01 16146.08 63.07 0.00 0.00 7921.33 7179.17 32410.53 00:18:02.192 [2024-12-10T14:23:27.029Z] =================================================================================================================== 00:18:02.192 [2024-12-10T14:23:27.029Z] Total : 16146.08 63.07 0.00 0.00 7921.33 7179.17 32410.53 00:18:02.192 { 00:18:02.192 "results": [ 00:18:02.192 { 00:18:02.192 "job": "nvme0n1", 00:18:02.192 "core_mask": "0x2", 00:18:02.192 "workload": "randread", 00:18:02.192 "status": "finished", 00:18:02.192 "queue_depth": 128, 00:18:02.192 "io_size": 4096, 00:18:02.192 "runtime": 2.013678, 00:18:02.192 "iops": 16146.076979536947, 00:18:02.192 "mibps": 63.0706132013162, 00:18:02.192 "io_failed": 0, 00:18:02.192 "io_timeout": 0, 00:18:02.192 "avg_latency_us": 7921.3323051199095, 00:18:02.192 "min_latency_us": 7179.170909090909, 00:18:02.192 "max_latency_us": 
32410.53090909091 00:18:02.192 } 00:18:02.192 ], 00:18:02.192 "core_count": 1 00:18:02.192 } 00:18:02.192 14:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:02.192 14:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:02.192 14:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:02.192 14:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:02.192 | .driver_specific 00:18:02.192 | .nvme_error 00:18:02.192 | .status_code 00:18:02.192 | .command_transient_transport_error' 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 )) 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80946 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80946 ']' 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80946 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80946 00:18:02.451 killing process with pid 80946 00:18:02.451 Received shutdown signal, test time was about 2.000000 seconds 00:18:02.451 00:18:02.451 Latency(us) 00:18:02.451 [2024-12-10T14:23:27.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.451 [2024-12-10T14:23:27.288Z] =================================================================================================================== 00:18:02.451 [2024-12-10T14:23:27.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80946' 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80946 00:18:02.451 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80946 00:18:02.710 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80993 00:18:02.711 14:23:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80993 /var/tmp/bperf.sock 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80993 ']' 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.711 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.711 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:02.711 Zero copy mechanism will not be used. 00:18:02.711 [2024-12-10 14:23:27.383642] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:18:02.711 [2024-12-10 14:23:27.383743] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80993 ] 00:18:02.711 [2024-12-10 14:23:27.528562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.970 [2024-12-10 14:23:27.558896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.970 [2024-12-10 14:23:27.586545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.970 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.970 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:02.970 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.970 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:03.228 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:03.228 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.228 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.228 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.228 14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.228 
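The trace above sets up the second error-injection pass: bdevperf is launched against /var/tmp/bperf.sock with a 2-second, 128 KiB random-read workload at queue depth 16, per-error-code NVMe statistics are enabled, a controller is attached with --ddgst so data digests are verified on received payloads, crc32c results are then corrupted through accel_error_inject_error, and the resulting COMMAND TRANSIENT TRANSPORT ERROR completions are read back from bdev_get_iostat. A condensed sketch of that sequence, using only the paths, addresses, and flags visible in the trace (the RPC socket behind the bare rpc_cmd helper is not shown in the log and is assumed here to be the same bperf socket), could look like:

  # Sketch reconstructed from the host/digest.sh trace above; not a verbatim excerpt of the script.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # Start the initiator: 2-second random-read run, 128 KiB I/O, queue depth 16, core mask 0x2.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &

  # Track NVMe error statistics per status code and retry indefinitely on failed I/O.
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled so payload CRCs are checked on completion.
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 32 crc32c results so received data digests mismatch (socket assumed, see note above).
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive I/O, then count completions flagged as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
  "$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test then asserts that this count is greater than zero, which is exactly the check traced earlier as "(( 127 > 0 ))" for the previous bperf instance.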
14:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.487 nvme0n1 00:18:03.487 14:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:03.487 14:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.487 14:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.487 14:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.487 14:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:03.487 14:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.748 Zero copy mechanism will not be used. 00:18:03.748 Running I/O for 2 seconds... 00:18:03.748 [2024-12-10 14:23:28.331704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.331761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.331774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.335796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.335843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.335855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.339926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.339983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.339996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.343949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.344006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.344018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.348004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.348061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.348073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.352086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.352132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.352143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.356143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.356184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.356196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.360138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.360182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.360194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.364246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.364291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.364302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.368249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.368294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.368305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.372253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.372299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.372310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.376242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 
[2024-12-10 14:23:28.376287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.376298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.380393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.380438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.380449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.384463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.384509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.384520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.388642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.748 [2024-12-10 14:23:28.388688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.748 [2024-12-10 14:23:28.388699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.748 [2024-12-10 14:23:28.392787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.392834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.392845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.396790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.396837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.396848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.400912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.400959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.400995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.404883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.404930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.404956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.408831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.408877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.408888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.412866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.412912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.412923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.417019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.417064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.417074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.420863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.420908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.420919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.424874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.424920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.424946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.428901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.428946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.428972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.433022] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.433078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.433089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.437406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.437452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.437464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.441780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.441826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.441838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.446423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.446471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.446482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.450995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.451039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.451052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.455830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.455879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.455905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.460526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.460589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.460600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.465059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.465118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.465131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.469571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.469617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.469628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.473928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.473998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.474011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.478414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.478460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.482709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.482754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.482765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.486889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.486935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.486947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.490945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.490997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.491009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.494921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.494976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.494988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.498943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.498996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.499008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.749 [2024-12-10 14:23:28.502904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.749 [2024-12-10 14:23:28.502950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.749 [2024-12-10 14:23:28.502960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.506908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.506955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.506965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.510956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.511010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.511022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.515088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.515156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.515170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.519178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.519223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.519235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.523045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.523089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.523099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.526980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.527024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.527035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.531016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.531060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.531071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.534893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.534939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.534950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.538815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.538860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.538870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.542839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.542884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.542895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.546891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.546935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.750 [2024-12-10 14:23:28.546946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.550955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.551008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.551019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.555342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.555375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.555388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.559534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.559578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.559589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.563547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.563592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.563603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.567661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.567706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.567718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.571732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.571777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.571788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.575751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.575796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.575807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.750 [2024-12-10 14:23:28.579996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:03.750 [2024-12-10 14:23:28.580050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.750 [2024-12-10 14:23:28.580061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.584640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.584689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.584700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.589237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.589283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.589295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.593399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.593445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.593456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.597557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.597603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.597614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.601721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.601766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.601777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.605816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.605863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.605873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.609861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.609907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.609918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.613832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.613879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.613890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.617876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.617921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.617947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.621950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.622005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.622016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.625975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.626030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.626042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.629850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.629895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.629906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.633917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 
[2024-12-10 14:23:28.633990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.634001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.637872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.637918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.637944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.011 [2024-12-10 14:23:28.641877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.011 [2024-12-10 14:23:28.641922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.011 [2024-12-10 14:23:28.641948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.645951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.646006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.646017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.649856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.649902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.649913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.653903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.653965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.653976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.657988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.658043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.658054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.662024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.662069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.662079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.666114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.666158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.666169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.670159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.670205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.670215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.674148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.674194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.674204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.678152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.678197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.678207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.682130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.682174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.682185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.686208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.686252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.686262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.690476] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.690521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.690532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.695183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.695218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.695230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.699228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.699262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.699274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.703401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.703449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.703491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.707618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.707663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.707674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.711773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.711818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.711829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.715913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.715959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.715980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:18:04.012 [2024-12-10 14:23:28.720010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.720065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.720076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.724106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.724151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.724161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.728102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.728145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.728155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.732162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.732206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.732217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.736210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.736255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.736265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.740097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.740140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.740151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.744141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.744170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.744180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.748109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.748154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.748164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.752133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.752178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.012 [2024-12-10 14:23:28.752188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.012 [2024-12-10 14:23:28.756219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.012 [2024-12-10 14:23:28.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.756274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.760232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.760277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.760288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.764172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.764216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.764227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.768214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.768258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.768269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.772335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.772382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.772392] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.776406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.776451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.776462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.780514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.780558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.780569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.785032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.785080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.785092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.789384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.789428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.789439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.793874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.793949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.793961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.798348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.798376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.798387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.802839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.802866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 
14:23:28.802877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.807318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.807349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.807361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.811768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.811796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.811807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.815995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.816034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.816045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.820091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.820118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.820129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.824126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.824152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.824162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.828428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.828455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.828466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.832474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.832502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.832512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.836576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.836604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.836614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.840660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.840688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.840699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.013 [2024-12-10 14:23:28.845222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.013 [2024-12-10 14:23:28.845250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.013 [2024-12-10 14:23:28.845260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.849446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.849473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.849483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.853862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.853890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.853901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.857906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.857949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.857969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.862218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.862246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.862256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.866285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.866312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.866322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.870276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.870304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.870315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.874509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.874538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.874548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.878783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.878811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.878822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.882865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.882893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.882904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.886938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.886991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.887003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.890989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.891016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.891027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.895223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.895253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.895266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.899769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.899798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.899809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.904422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.904468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.904495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.909136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.909163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.909174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.913980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.914016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.275 [2024-12-10 14:23:28.914028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.275 [2024-12-10 14:23:28.918513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.275 [2024-12-10 14:23:28.918574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.918587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.923031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 
00:18:04.276 [2024-12-10 14:23:28.923066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.927628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.927673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.927684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.932334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.932376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.932386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.936753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.936779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.936790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.940783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.940810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.940821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.944940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.944976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.944987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.949184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.949211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.949221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.953226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.953253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.953264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.957392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.957420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.957430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.961400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.961428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.961438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.965850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.965878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.965889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.969947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.969985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.969996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.974010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.974036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.974047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.978080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.978106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.978116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.982376] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.982423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.982434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.986474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.986522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.986550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.990630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.990679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.990707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.994617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.994666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.994694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:28.998553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:28.998602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:28.998630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:29.002611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:29.002660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:29.002687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:29.006601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:29.006650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:29.006678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:18:04.276 [2024-12-10 14:23:29.010581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:29.010630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:29.010658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:29.014908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:29.014981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:29.014995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:29.019065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:29.019139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:29.019168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.276 [2024-12-10 14:23:29.023239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.276 [2024-12-10 14:23:29.023293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.276 [2024-12-10 14:23:29.023306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.027258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.027310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.027338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.031386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.031455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.031488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.035349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.035386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.035415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.039332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.039385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.039414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.043704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.043754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.043782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.048124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.048173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.048200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.052331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.052396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.052422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.056999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.057060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.057088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.061501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.061569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.061598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.066032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.066090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.066118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.070388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.070437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.070465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.074739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.074791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.074819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.079113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.079167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.079180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.083375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.083413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.083427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.087687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.087736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.087764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.091773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.091822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.091850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.096002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.096061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 
[2024-12-10 14:23:29.096090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.100347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.100428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.100456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.277 [2024-12-10 14:23:29.104981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.277 [2024-12-10 14:23:29.105043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.277 [2024-12-10 14:23:29.105073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.109547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.109597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.109624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.113843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.113892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.113919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.118597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.118646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.118674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.122809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.122845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.122873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.126900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.126988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.127000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.130909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.130996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.131008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.135092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.135150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.135163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.139191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.139230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.139243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.143261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.143297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.143327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.147411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.147491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.147505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.151891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.151941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.151994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.156058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.156107] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.156136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.160195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.160244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.160272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.164306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.164356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.164383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.539 [2024-12-10 14:23:29.168641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.539 [2024-12-10 14:23:29.168691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.539 [2024-12-10 14:23:29.168718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.172729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.172778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.172806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.176918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.176991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.177005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.180990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.181038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.181065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.185263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.185312] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.185339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.189415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.189463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.189490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.193518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.193567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.193594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.197847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.197896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.197924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.202092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.202140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.202167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.206164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.206213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.206241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.210594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.210644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.210671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.214598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.214647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.214674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.218736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.218786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.218814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.223181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.223216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.223245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.227224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.227278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.227291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.231355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.231391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.231420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.235456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.235535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.235562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.239631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.239680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.239707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.243680] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.243729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.243757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.247828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.247878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.247905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.252002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.252060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.252088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.255962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.256034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.256062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.259927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.259998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.260027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.263999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.264057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.264085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.268027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.268084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.540 [2024-12-10 14:23:29.268112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:18:04.540 [2024-12-10 14:23:29.272009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.540 [2024-12-10 14:23:29.272066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.272094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.275900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.276005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.276018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.279925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.280013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.280026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.283886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.283976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.284005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.288476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.288528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.288557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.293102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.293152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.293179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.297857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.297911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.297925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.302300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.302349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.302377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.306456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.306506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.306533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.310576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.310626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.310653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.314720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.314768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.314796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.318896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.318947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.318985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.323726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.323796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.323809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.541 [2024-12-10 14:23:29.328458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:04.541 [2024-12-10 14:23:29.328512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.541 [2024-12-10 14:23:29.328526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:18:04.541 7393.00 IOPS, 924.12 MiB/s [2024-12-10T14:23:29.378Z] [2024-12-10 14:23:29.334208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620)
00:18:04.541 [2024-12-10 14:23:29.334258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:04.541 [2024-12-10 14:23:29.334287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line sequence repeats from 14:23:29.338 through 14:23:29.903 (console timestamps 00:18:04.541-00:18:05.328) on tqpair=(0x1675620): a data digest error reported by nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, the affected READ (sqid:1, cid cycling 0-15, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:18:05.328 [2024-12-10 14:23:29.907315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620)
00:18:05.328 [2024-12-10 14:23:29.907353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.328 [2024-12-10 14:23:29.907373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.328 [2024-12-10 14:23:29.911633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.328 [2024-12-10 14:23:29.911682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.328 [2024-12-10 14:23:29.911709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.328 [2024-12-10 14:23:29.915846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.915912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.915924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.920488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.920527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.920540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.925235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.925285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.925314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.929797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.929836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.929848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.934364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.934402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.934416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.938870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.938908] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.938922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.943559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.943598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.943611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.948243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.948292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.948320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.952810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.952849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.952862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.957448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.957499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.957527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.962044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.962102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.962131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.966540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.966592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.966604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.970900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.970949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.971023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.975049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.975097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.975149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.979275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.979311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.979340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.983646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.983680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.983692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.988414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.988465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.988493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.993003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.993065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.993078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:29.997645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:29.997695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:29.997707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:30.002222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:30.002258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:30.002273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:30.006861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:30.006929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:30.006940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:30.011567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:30.011616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:30.011643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:30.016021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:30.016083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:30.016097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:30.020554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:30.020590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:30.020603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.329 [2024-12-10 14:23:30.024993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.329 [2024-12-10 14:23:30.025029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.329 [2024-12-10 14:23:30.025042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.029690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.029741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.029753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.034135] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.034184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.034197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.038492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.038540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.038552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.042735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.042785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.042796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.047027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.047076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.047087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.051312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.051348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.051361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.055992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.056027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.056040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.060565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.060616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.060629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:18:05.330 [2024-12-10 14:23:30.065122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.065155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.065166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.069345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.069394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.069405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.073460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.073508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.073535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.078002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.078076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.078090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.082274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.082307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.082319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.086470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.086518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.086530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.090665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.090714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.090725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.094984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.095040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.095051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.099060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.099114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.099158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.103267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.103302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.103314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.107341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.107376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.107388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.111498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.111545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.111556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.115632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.115679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.115690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.119900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.119948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.119971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.123950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.124006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.124018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.128074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.128126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.128138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.132119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.330 [2024-12-10 14:23:30.132166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.330 [2024-12-10 14:23:30.132177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.330 [2024-12-10 14:23:30.136171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.136218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.331 [2024-12-10 14:23:30.136230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.331 [2024-12-10 14:23:30.140236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.140283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.331 [2024-12-10 14:23:30.140295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.331 [2024-12-10 14:23:30.144412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.144446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.331 [2024-12-10 14:23:30.144458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.331 [2024-12-10 14:23:30.148544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.148592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.331 [2024-12-10 14:23:30.148603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.331 [2024-12-10 14:23:30.152801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.152849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.331 [2024-12-10 14:23:30.152860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.331 [2024-12-10 14:23:30.157157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.157190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.331 [2024-12-10 14:23:30.157201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.331 [2024-12-10 14:23:30.161657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.331 [2024-12-10 14:23:30.161693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.331 [2024-12-10 14:23:30.161706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.591 [2024-12-10 14:23:30.166060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.591 [2024-12-10 14:23:30.166108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.591 [2024-12-10 14:23:30.166120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.591 [2024-12-10 14:23:30.170512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.591 [2024-12-10 14:23:30.170576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.591 [2024-12-10 14:23:30.170587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.591 [2024-12-10 14:23:30.174487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.591 [2024-12-10 14:23:30.174536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.591 [2024-12-10 14:23:30.174547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.591 [2024-12-10 14:23:30.178724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.591 [2024-12-10 14:23:30.178761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.178773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.182818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.182867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.182895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.186901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.186949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.187005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.190898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.190946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.190983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.194826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.194874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.194902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.198791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.198840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.198867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.202865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.202915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.202943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.206784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.206833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.206861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.210766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.210815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.210843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.214809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.214857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.214884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.218939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.218995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.219023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.222903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.222977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.222990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.227354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.227392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.227405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.231679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.231729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.231757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.235807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 
00:18:05.592 [2024-12-10 14:23:30.235856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.235884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.240392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.240443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.240471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.245202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.245252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.245280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.250161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.250209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.250237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.254483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.254548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.254577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.258820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.258870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.258898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.263055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.263127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.263156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.267363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.267401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.267414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.271730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.271779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.271807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.275921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.275994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.276024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.592 [2024-12-10 14:23:30.279986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.592 [2024-12-10 14:23:30.280045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.592 [2024-12-10 14:23:30.280073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.284200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.284249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.284278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.288430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.288480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.288508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.292737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.292788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.292816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.296950] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.297008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.297037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.301309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.301358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.301385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.305577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.305626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.305654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.309790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.309839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.309866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.313880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.313928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.313956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.318396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.318460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.318487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.593 [2024-12-10 14:23:30.322516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.322565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.322592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:18:05.593 [2024-12-10 14:23:30.326680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.326728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.326755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.593 7347.00 IOPS, 918.38 MiB/s [2024-12-10T14:23:30.430Z] [2024-12-10 14:23:30.331804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1675620) 00:18:05.593 [2024-12-10 14:23:30.331853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.593 [2024-12-10 14:23:30.331881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.593 00:18:05.593 Latency(us) 00:18:05.593 [2024-12-10T14:23:30.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.593 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:05.593 nvme0n1 : 2.00 7343.38 917.92 0.00 0.00 2175.24 1779.90 6076.97 00:18:05.593 [2024-12-10T14:23:30.430Z] =================================================================================================================== 00:18:05.593 [2024-12-10T14:23:30.430Z] Total : 7343.38 917.92 0.00 0.00 2175.24 1779.90 6076.97 00:18:05.593 { 00:18:05.593 "results": [ 00:18:05.593 { 00:18:05.593 "job": "nvme0n1", 00:18:05.593 "core_mask": "0x2", 00:18:05.593 "workload": "randread", 00:18:05.593 "status": "finished", 00:18:05.593 "queue_depth": 16, 00:18:05.593 "io_size": 131072, 00:18:05.593 "runtime": 2.003165, 00:18:05.593 "iops": 7343.379102570183, 00:18:05.593 "mibps": 917.9223878212729, 00:18:05.593 "io_failed": 0, 00:18:05.593 "io_timeout": 0, 00:18:05.593 "avg_latency_us": 2175.2445777146036, 00:18:05.593 "min_latency_us": 1779.898181818182, 00:18:05.593 "max_latency_us": 6076.9745454545455 00:18:05.593 } 00:18:05.593 ], 00:18:05.593 "core_count": 1 00:18:05.593 } 00:18:05.593 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:05.593 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:05.593 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:05.593 | .driver_specific 00:18:05.593 | .nvme_error 00:18:05.593 | .status_code 00:18:05.593 | .command_transient_transport_error' 00:18:05.593 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 475 > 0 )) 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80993 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80993 ']' 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80993 00:18:05.853 14:23:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80993 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:05.853 killing process with pid 80993 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80993' 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80993 00:18:05.853 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.853 00:18:05.853 Latency(us) 00:18:05.853 [2024-12-10T14:23:30.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.853 [2024-12-10T14:23:30.690Z] =================================================================================================================== 00:18:05.853 [2024-12-10T14:23:30.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.853 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80993 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81046 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81046 /var/tmp/bperf.sock 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81046 ']' 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
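For reference, the pass/fail check traced just above (host/digest.sh's get_transient_errcount followed by the (( 475 > 0 )) test) only needs the per-status-code NVMe error counters that bdevperf exposes once --nvme-error-stat is set. A minimal stand-alone sketch of that query, reusing the RPC socket, bdev name, and jq path exactly as they appear in the trace:

    # Read the transient transport error counter from the running bdevperf instance.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The digest-error test only requires that at least one injected digest error
    # was surfaced as COMMAND TRANSIENT TRANSPORT ERROR (475 of them in this run).
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"
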
00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:06.177 14:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:06.177 [2024-12-10 14:23:30.847020] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization...
00:18:06.177 [2024-12-10 14:23:30.847141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81046 ]
00:18:06.436 [2024-12-10 14:23:30.992377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:06.436 [2024-12-10 14:23:31.021895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:06.436 [2024-12-10 14:23:31.049522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:06.436 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:06.436 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:18:06.436 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:06.436 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:06.695 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:06.695 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.695 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:06.696 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.696 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:06.696 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:06.954 nvme0n1
00:18:06.954 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:18:06.954 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.954 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:06.954 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.954 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:06.954 14:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:07.214 Running I/O for 2 seconds...
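That completes the setup for the randwrite 4096/128 leg: run_bperf_err starts a fresh bdevperf against /var/tmp/bperf.sock, enables NVMe error statistics and unlimited bdev retries, re-arms a crc32c corruption in the accel layer, attaches the target with data digest (--ddgst) enabled, and then drives the workload through perform_tests. A condensed sketch of that sequence follows, reconstructed from the xtrace above rather than copied from digest.sh; the paths, address and NQN are the ones in the log, while routing the accel_error_inject_error calls to the target application's default RPC socket is an assumption based on rpc_cmd (not bperf_rpc) being used for them:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core 1 (-m 2) with the randwrite/4KiB/qd128 profile for 2 s;
    # -z makes it wait for perform_tests instead of starting I/O immediately.
    # digest.sh then blocks in waitforlisten until the RPC socket exists.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    bperf_rpc() { "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

    # Keep per-controller NVMe error counters and never give up on retries.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Error injection lives in the accel layer of the nvmf target application
    # (assumed here to listen on rpc.py's default /var/tmp/spdk.sock): clear any
    # stale injection first.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach the TCP target with data digest enabled so corrupted CRCs show up
    # as data digest errors on the wire.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 256 crc32c operations, then kick off the configured run.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests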
00:18:07.214 [2024-12-10 14:23:31.818354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efb048 00:18:07.214 [2024-12-10 14:23:31.819933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.820012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.833162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efb8b8 00:18:07.214 [2024-12-10 14:23:31.834545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.834594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.847548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efc128 00:18:07.214 [2024-12-10 14:23:31.848898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.848945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.862023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efc998 00:18:07.214 [2024-12-10 14:23:31.863480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.863543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.876494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efd208 00:18:07.214 [2024-12-10 14:23:31.877808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.877854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.890747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efda78 00:18:07.214 [2024-12-10 14:23:31.892183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.892214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.905081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efe2e8 00:18:07.214 [2024-12-10 14:23:31.906374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.906421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.919305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efeb58 00:18:07.214 [2024-12-10 14:23:31.920627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.920659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.939453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efef90 00:18:07.214 [2024-12-10 14:23:31.941917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.941951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.953757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efeb58 00:18:07.214 [2024-12-10 14:23:31.956371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.956399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.969410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efe2e8 00:18:07.214 [2024-12-10 14:23:31.972348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.214 [2024-12-10 14:23:31.972381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:07.214 [2024-12-10 14:23:31.986588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efda78 00:18:07.214 [2024-12-10 14:23:31.989308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.215 [2024-12-10 14:23:31.989334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:07.215 [2024-12-10 14:23:32.002238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efd208 00:18:07.215 [2024-12-10 14:23:32.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.215 [2024-12-10 14:23:32.004676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:07.215 [2024-12-10 14:23:32.016670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efc998 00:18:07.215 [2024-12-10 14:23:32.018971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.215 [2024-12-10 14:23:32.019027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:18:07.215 [2024-12-10 14:23:32.031095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efc128 00:18:07.215 [2024-12-10 14:23:32.033712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.215 [2024-12-10 14:23:32.033743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:07.215 [2024-12-10 14:23:32.045755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efb8b8 00:18:07.474 [2024-12-10 14:23:32.048482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.048670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.061432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efb048 00:18:07.474 [2024-12-10 14:23:32.063916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.063944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.075925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efa7d8 00:18:07.474 [2024-12-10 14:23:32.078478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.078511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.090539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef9f68 00:18:07.474 [2024-12-10 14:23:32.092792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.092822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.104902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef96f8 00:18:07.474 [2024-12-10 14:23:32.107177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.107337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.119357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef8e88 00:18:07.474 [2024-12-10 14:23:32.121797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.121830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.134014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef8618 00:18:07.474 [2024-12-10 14:23:32.136509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.136542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.148590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef7da8 00:18:07.474 [2024-12-10 14:23:32.150774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.150804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.163016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef7538 00:18:07.474 [2024-12-10 14:23:32.165207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.165390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.178517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef6cc8 00:18:07.474 [2024-12-10 14:23:32.180689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.474 [2024-12-10 14:23:32.180719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:07.474 [2024-12-10 14:23:32.192948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef6458 00:18:07.474 [2024-12-10 14:23:32.195046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.195103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.207469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef5be8 00:18:07.475 [2024-12-10 14:23:32.209624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.209653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.222039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef5378 00:18:07.475 [2024-12-10 14:23:32.224432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.224465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.236793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef4b08 00:18:07.475 [2024-12-10 14:23:32.238855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.238884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.251211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef4298 00:18:07.475 [2024-12-10 14:23:32.253215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.253392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.265691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef3a28 00:18:07.475 [2024-12-10 14:23:32.267818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.267849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.280451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef31b8 00:18:07.475 [2024-12-10 14:23:32.282506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.282539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.475 [2024-12-10 14:23:32.295282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef2948 00:18:07.475 [2024-12-10 14:23:32.297318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.475 [2024-12-10 14:23:32.297350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.734 [2024-12-10 14:23:32.310435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef20d8 00:18:07.734 [2024-12-10 14:23:32.312458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.734 [2024-12-10 14:23:32.312490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:07.734 [2024-12-10 14:23:32.325312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef1868 00:18:07.734 [2024-12-10 14:23:32.327259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.734 [2024-12-10 14:23:32.327297] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:07.734 [2024-12-10 14:23:32.339643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef0ff8 00:18:07.734 [2024-12-10 14:23:32.341815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.734 [2024-12-10 14:23:32.341847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:07.734 [2024-12-10 14:23:32.354246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef0788 00:18:07.734 [2024-12-10 14:23:32.356235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.356267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.369558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeff18 00:18:07.735 [2024-12-10 14:23:32.371604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.371787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.386236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eef6a8 00:18:07.735 [2024-12-10 14:23:32.388523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.388552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.401990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeee38 00:18:07.735 [2024-12-10 14:23:32.404393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.404422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.417394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eee5c8 00:18:07.735 [2024-12-10 14:23:32.419642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.419824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.432590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eedd58 00:18:07.735 [2024-12-10 14:23:32.434732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.434908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.448017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eed4e8 00:18:07.735 [2024-12-10 14:23:32.450071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.450270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.463037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eecc78 00:18:07.735 [2024-12-10 14:23:32.465081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.465285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.478311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eec408 00:18:07.735 [2024-12-10 14:23:32.480218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.480251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.492947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eebb98 00:18:07.735 [2024-12-10 14:23:32.494715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.494761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.507744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeb328 00:18:07.735 [2024-12-10 14:23:32.509457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.509502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.522568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeaab8 00:18:07.735 [2024-12-10 14:23:32.524401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.524449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.538236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eea248 00:18:07.735 [2024-12-10 14:23:32.540133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 
14:23:32.540164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.735 [2024-12-10 14:23:32.555039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee99d8 00:18:07.735 [2024-12-10 14:23:32.556872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.735 [2024-12-10 14:23:32.556916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.571809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee9168 00:18:07.995 [2024-12-10 14:23:32.573780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.573828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.587346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee88f8 00:18:07.995 [2024-12-10 14:23:32.589146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.589178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.603658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee8088 00:18:07.995 [2024-12-10 14:23:32.605487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.605553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.619542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee7818 00:18:07.995 [2024-12-10 14:23:32.621274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.621306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.634890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee6fa8 00:18:07.995 [2024-12-10 14:23:32.636619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.636665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.649736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee6738 00:18:07.995 [2024-12-10 14:23:32.651485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:07.995 [2024-12-10 14:23:32.651531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.664787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee5ec8 00:18:07.995 [2024-12-10 14:23:32.666424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.666470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.679773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee5658 00:18:07.995 [2024-12-10 14:23:32.681476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.681523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.694741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee4de8 00:18:07.995 [2024-12-10 14:23:32.696354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.696399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.709644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee4578 00:18:07.995 [2024-12-10 14:23:32.711250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.711284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.724885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee3d08 00:18:07.995 [2024-12-10 14:23:32.726466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.726513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.739785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee3498 00:18:07.995 [2024-12-10 14:23:32.741352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.741398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.754757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee2c28 00:18:07.995 [2024-12-10 14:23:32.756356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6846 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.756402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.769734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee23b8 00:18:07.995 [2024-12-10 14:23:32.771250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.771285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.784969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee1b48 00:18:07.995 [2024-12-10 14:23:32.786465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.786512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.799245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee12d8 00:18:07.995 [2024-12-10 14:23:32.800693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.800745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:07.995 16826.00 IOPS, 65.73 MiB/s [2024-12-10T14:23:32.832Z] [2024-12-10 14:23:32.814529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee0a68 00:18:07.995 [2024-12-10 14:23:32.815971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.995 [2024-12-10 14:23:32.816007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:07.995 [2024-12-10 14:23:32.829268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee01f8 00:18:08.255 [2024-12-10 14:23:32.830703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.830767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.844283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016edf988 00:18:08.255 [2024-12-10 14:23:32.845619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.845667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.858608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016edf118 00:18:08.255 [2024-12-10 14:23:32.859968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.860043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.873250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ede8a8 00:18:08.255 [2024-12-10 14:23:32.874534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.874580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.887508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ede038 00:18:08.255 [2024-12-10 14:23:32.888809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.888856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.907492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ede038 00:18:08.255 [2024-12-10 14:23:32.909879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.909926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.921765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ede8a8 00:18:08.255 [2024-12-10 14:23:32.924301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.924332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.936117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016edf118 00:18:08.255 [2024-12-10 14:23:32.938447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.938495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.950219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016edf988 00:18:08.255 [2024-12-10 14:23:32.952542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.952590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.964408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee01f8 00:18:08.255 [2024-12-10 
14:23:32.966741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.966785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.978800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee0a68 00:18:08.255 [2024-12-10 14:23:32.981208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.981254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:32.994984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee12d8 00:18:08.255 [2024-12-10 14:23:32.997791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:32.997843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:33.012058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee1b48 00:18:08.255 [2024-12-10 14:23:33.014457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:33.014504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:33.028060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee23b8 00:18:08.255 [2024-12-10 14:23:33.030433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:33.030480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:33.042839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee2c28 00:18:08.255 [2024-12-10 14:23:33.045189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:33.045235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:33.057129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee3498 00:18:08.255 [2024-12-10 14:23:33.059359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.255 [2024-12-10 14:23:33.059393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.255 [2024-12-10 14:23:33.071299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with 
pdu=0x200016ee3d08 00:18:08.256 [2024-12-10 14:23:33.073450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.256 [2024-12-10 14:23:33.073497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.256 [2024-12-10 14:23:33.085417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee4578 00:18:08.256 [2024-12-10 14:23:33.087825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.256 [2024-12-10 14:23:33.087871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.515 [2024-12-10 14:23:33.100670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee4de8 00:18:08.515 [2024-12-10 14:23:33.102843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.515 [2024-12-10 14:23:33.102887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.515 [2024-12-10 14:23:33.114991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee5658 00:18:08.516 [2024-12-10 14:23:33.117191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.117222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.129361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee5ec8 00:18:08.516 [2024-12-10 14:23:33.131553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.131599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.143736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee6738 00:18:08.516 [2024-12-10 14:23:33.145875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.145920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.158273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee6fa8 00:18:08.516 [2024-12-10 14:23:33.160378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.160423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.172625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416b70) with pdu=0x200016ee7818 00:18:08.516 [2024-12-10 14:23:33.174720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.174768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.187011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee8088 00:18:08.516 [2024-12-10 14:23:33.189150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.189195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.201389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee88f8 00:18:08.516 [2024-12-10 14:23:33.203535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.203581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.215835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee9168 00:18:08.516 [2024-12-10 14:23:33.217857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.230197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ee99d8 00:18:08.516 [2024-12-10 14:23:33.232267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.232297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.244552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eea248 00:18:08.516 [2024-12-10 14:23:33.246532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.246577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.258744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeaab8 00:18:08.516 [2024-12-10 14:23:33.260831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.260876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.273198] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeb328 00:18:08.516 [2024-12-10 14:23:33.275133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.275164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.287371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eebb98 00:18:08.516 [2024-12-10 14:23:33.289375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.289421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.301621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eec408 00:18:08.516 [2024-12-10 14:23:33.303726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.303771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.316264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eecc78 00:18:08.516 [2024-12-10 14:23:33.318163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.318193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.330570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eed4e8 00:18:08.516 [2024-12-10 14:23:33.332544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.332589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:08.516 [2024-12-10 14:23:33.344858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eedd58 00:18:08.516 [2024-12-10 14:23:33.346771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.516 [2024-12-10 14:23:33.346819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.360253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eee5c8 00:18:08.776 [2024-12-10 14:23:33.362094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.362143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.374579] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeee38 00:18:08.776 [2024-12-10 14:23:33.376553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.376600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.389036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eef6a8 00:18:08.776 [2024-12-10 14:23:33.390773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.390818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.403295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016eeff18 00:18:08.776 [2024-12-10 14:23:33.405106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.405153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.417420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef0788 00:18:08.776 [2024-12-10 14:23:33.419190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.419223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.431683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef0ff8 00:18:08.776 [2024-12-10 14:23:33.433461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.433506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.445872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef1868 00:18:08.776 [2024-12-10 14:23:33.447670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.447715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.776 [2024-12-10 14:23:33.460209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef20d8 00:18:08.776 [2024-12-10 14:23:33.461897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.776 [2024-12-10 14:23:33.461942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.776 
[2024-12-10 14:23:33.474398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef2948 00:18:08.776 [2024-12-10 14:23:33.476180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.476211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.488970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef31b8 00:18:08.777 [2024-12-10 14:23:33.490631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.490678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.503900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef3a28 00:18:08.777 [2024-12-10 14:23:33.505702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.505749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.518369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef4298 00:18:08.777 [2024-12-10 14:23:33.520100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.520145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.532865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef4b08 00:18:08.777 [2024-12-10 14:23:33.534494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.534540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.547678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef5378 00:18:08.777 [2024-12-10 14:23:33.549467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.549514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.564815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef5be8 00:18:08.777 [2024-12-10 14:23:33.566646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.566677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.581715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef6458 00:18:08.777 [2024-12-10 14:23:33.583528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.583574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:08.777 [2024-12-10 14:23:33.597858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef6cc8 00:18:08.777 [2024-12-10 14:23:33.599631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.777 [2024-12-10 14:23:33.599677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.613730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef7538 00:18:09.037 [2024-12-10 14:23:33.615595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.615643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.629112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef7da8 00:18:09.037 [2024-12-10 14:23:33.630635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.630683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.643878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef8618 00:18:09.037 [2024-12-10 14:23:33.645461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.645507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.658610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef8e88 00:18:09.037 [2024-12-10 14:23:33.660230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.660261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.673570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef96f8 00:18:09.037 [2024-12-10 14:23:33.675060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.675092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 
cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.688701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016ef9f68 00:18:09.037 [2024-12-10 14:23:33.690238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.690268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.703835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efa7d8 00:18:09.037 [2024-12-10 14:23:33.705386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.705417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.718930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efb048 00:18:09.037 [2024-12-10 14:23:33.720456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.720488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.733645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efb8b8 00:18:09.037 [2024-12-10 14:23:33.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.735086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.747790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efc128 00:18:09.037 [2024-12-10 14:23:33.749218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.749247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.762213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efc998 00:18:09.037 [2024-12-10 14:23:33.763616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.763662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.776709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efd208 00:18:09.037 [2024-12-10 14:23:33.778217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.778249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:09.037 [2024-12-10 14:23:33.792760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efda78 00:18:09.037 [2024-12-10 14:23:33.794223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:09.037 17015.50 IOPS, 66.47 MiB/s [2024-12-10T14:23:33.874Z] [2024-12-10 14:23:33.808542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416b70) with pdu=0x200016efe2e8 00:18:09.037 [2024-12-10 14:23:33.809947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.037 [2024-12-10 14:23:33.810000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:09.037 00:18:09.037 Latency(us) 00:18:09.037 [2024-12-10T14:23:33.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.037 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.037 nvme0n1 : 2.01 17031.47 66.53 0.00 0.00 7508.42 2263.97 27286.81 00:18:09.037 [2024-12-10T14:23:33.874Z] =================================================================================================================== 00:18:09.037 [2024-12-10T14:23:33.874Z] Total : 17031.47 66.53 0.00 0.00 7508.42 2263.97 27286.81 00:18:09.037 { 00:18:09.037 "results": [ 00:18:09.037 { 00:18:09.037 "job": "nvme0n1", 00:18:09.037 "core_mask": "0x2", 00:18:09.037 "workload": "randwrite", 00:18:09.037 "status": "finished", 00:18:09.037 "queue_depth": 128, 00:18:09.037 "io_size": 4096, 00:18:09.037 "runtime": 2.00564, 00:18:09.038 "iops": 17031.471251071976, 00:18:09.038 "mibps": 66.52918457449991, 00:18:09.038 "io_failed": 0, 00:18:09.038 "io_timeout": 0, 00:18:09.038 "avg_latency_us": 7508.418239622726, 00:18:09.038 "min_latency_us": 2263.970909090909, 00:18:09.038 "max_latency_us": 27286.807272727274 00:18:09.038 } 00:18:09.038 ], 00:18:09.038 "core_count": 1 00:18:09.038 } 00:18:09.038 14:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:09.038 14:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:09.038 14:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:09.038 | .driver_specific 00:18:09.038 | .nvme_error 00:18:09.038 | .status_code 00:18:09.038 | .command_transient_transport_error' 00:18:09.038 14:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 134 > 0 )) 00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81046 00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81046 ']' 00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81046 
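The digest.sh trace above verifies the injected digest failures by reading the per-bdev NVMe error counters back out of bdevperf: bdev_get_iostat is issued against the bperf RPC socket and the command_transient_transport_error counter is extracted with jq, and the case passes as long as that count is greater than zero (134 in this run). A minimal stand-alone form of the same query, using only the socket path and bdev name visible in this trace (error counting requires the --nvme-error-stat option set earlier via bdev_nvme_set_options), would be:

    # read the NVMe error statistics collected by the initiator-side bdev_nvme layer
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    # the test then asserts the printed count is > 0, i.e. digest errors were actually seen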
00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.297 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81046 00:18:09.556 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:09.556 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:09.556 killing process with pid 81046 00:18:09.556 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81046' 00:18:09.556 Received shutdown signal, test time was about 2.000000 seconds 00:18:09.557 00:18:09.557 Latency(us) 00:18:09.557 [2024-12-10T14:23:34.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.557 [2024-12-10T14:23:34.394Z] =================================================================================================================== 00:18:09.557 [2024-12-10T14:23:34.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81046 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81046 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81093 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81093 /var/tmp/bperf.sock 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81093 ']' 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:09.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
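Here the previous bperf instance (pid 81046) has been shut down and run_bperf_err starts the next error case: random writes with a 128 KiB I/O size at queue depth 16. bdevperf is launched with -z so it idles on its RPC socket until perform_tests is called, the initiator is configured to keep NVMe error statistics and retry failed commands indefinitely, the controller is attached with data digest (--ddgst) enabled, and crc32c corruption is then armed with accel_error_inject_error before the run is kicked off. A condensed sketch of that sequence, reconstructed only from the commands visible in the trace that follows (note the injection call goes through the test's rpc_cmd helper rather than the bperf socket), would look like:

    # start bdevperf idle (-z) on its own RPC socket: randwrite, 128 KiB I/Os, qd=16, 2 s run
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    # collect NVMe error stats and retry failed commands without limit
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the target with data digest enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm crc32c corruption (type corrupt, interval 32); in the trace this is issued via rpc_cmd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the queued I/O workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests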
00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.557 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.557 [2024-12-10 14:23:34.333762] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:18:09.557 [2024-12-10 14:23:34.333862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81093 ] 00:18:09.557 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:09.557 Zero copy mechanism will not be used. 00:18:09.816 [2024-12-10 14:23:34.479730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.816 [2024-12-10 14:23:34.509232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.816 [2024-12-10 14:23:34.536943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:09.816 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.816 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:09.816 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:09.816 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:10.075 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:10.075 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.075 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.075 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.075 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.075 14:23:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.644 nvme0n1 00:18:10.644 14:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:10.644 14:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.644 14:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.644 14:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.644 14:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:10.644 14:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:10.644 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:10.644 Zero copy mechanism will not be used. 00:18:10.644 Running I/O for 2 seconds... 00:18:10.644 [2024-12-10 14:23:35.369589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.369680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.369708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.374375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.374451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.374473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.379290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.379384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.379407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.384100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.384180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.384201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.388807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.388887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.388907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.393491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.393572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.393593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.398194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.398278] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.398298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.402888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.402986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.403007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.407714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.407796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.407817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.412626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.412705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.412726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.417647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.417728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.417748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.422513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.422593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.422613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.427321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.427391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.427412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.432245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 
00:18:10.644 [2024-12-10 14:23:35.432328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.432349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.436980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.437058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.437078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.441613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.441691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.441711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.446365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.446444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.446464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.451072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.451176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.451197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.455963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.456054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.456074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.460622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.644 [2024-12-10 14:23:35.460701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.644 [2024-12-10 14:23:35.460721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.644 [2024-12-10 14:23:35.465380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.645 [2024-12-10 14:23:35.465456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.645 [2024-12-10 14:23:35.465476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.645 [2024-12-10 14:23:35.470141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.645 [2024-12-10 14:23:35.470219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.645 [2024-12-10 14:23:35.470239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.645 [2024-12-10 14:23:35.474751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.645 [2024-12-10 14:23:35.474828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.645 [2024-12-10 14:23:35.474847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.479957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.480052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.480089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.484921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.485010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.485029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.489669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.489748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.489768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.494375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.494452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.494472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.499178] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.499249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.499271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.504033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.504122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.504143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.508822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.508899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.508919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.513676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.513755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.513774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.518478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.518554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.518574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.523200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.523265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.523286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.905 [2024-12-10 14:23:35.528052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.528145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.905 [2024-12-10 14:23:35.528165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:18:10.905 [2024-12-10 14:23:35.532804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.905 [2024-12-10 14:23:35.532881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.532901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.537488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.537567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.537587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.542586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.542668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.542689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.547682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.547764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.547784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.552425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.552502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.552521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.557138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.557227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.557246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.561839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.561920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.561940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.566577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.566661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.566681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.571433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.571538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.571559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.576235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.576315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.576335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.581041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.581120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.581140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.585691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.585768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.585787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.590496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.590573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.590593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.595289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.595356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.595377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.600108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.600186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.600206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.605140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.605254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.610314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.610382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.610402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.615762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.615847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.615868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.621401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.621481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.621501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.626736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.626815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.626835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.632177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.632249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.632271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.637396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.637477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.637497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.642480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.642556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.642576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.647691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.647771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.647792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.652465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.652541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.652561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.906 [2024-12-10 14:23:35.657263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.906 [2024-12-10 14:23:35.657343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.906 [2024-12-10 14:23:35.657362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.662015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.662092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.662112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.666736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.666815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 
[2024-12-10 14:23:35.666834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.671677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.671765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.671785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.676478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.676555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.676574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.681273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.681352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.681371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.686185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.686274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.686294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.691018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.691082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.691102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.695918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.696017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.696037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.700618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.700695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.700716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.705340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.705417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.705436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.710107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.710184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.710204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.714776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.714854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.714874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.719655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.719736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.719757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.724429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.724504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.724524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.729244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.729320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.729340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.733947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.734035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.734055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.907 [2024-12-10 14:23:35.738877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:10.907 [2024-12-10 14:23:35.738956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.907 [2024-12-10 14:23:35.738975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.743830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.743924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.743944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.748763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.748840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.748860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.753616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.753695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.753715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.758409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.758486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.758505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.763231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.763299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.763321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.768132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.768210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.768230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.772783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.772860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.772880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.777514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.777593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.782344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.782420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.782439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.787042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.787103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.787164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.791982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.792072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.792104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.796692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 [2024-12-10 14:23:35.796772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.796791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.801404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.168 
[2024-12-10 14:23:35.801481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.168 [2024-12-10 14:23:35.801501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.168 [2024-12-10 14:23:35.806098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.806177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.806197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.810825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.810901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.810921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.815612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.815694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.815714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.820350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.820428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.820448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.825071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.825148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.825167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.829807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.829885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.829904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.834606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.834683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.834703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.839365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.839432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.839464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.844206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.844271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.844290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.848936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.849026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.849046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.853578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.853656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.853675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.858356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.858435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.858455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.863176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.863246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.863267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.868046] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.868125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.868145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.873022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.873102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.873122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.877712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.877797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.877817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.882556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.882634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.882654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.887338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.887405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.887426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.892227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.892291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.892311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.897008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.897095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.897114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:18:11.169 [2024-12-10 14:23:35.901898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.902008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.902042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.907006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.907084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.907114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.912299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.912411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.912431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.917702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.917789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.917826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.169 [2024-12-10 14:23:35.923083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.169 [2024-12-10 14:23:35.923195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.169 [2024-12-10 14:23:35.923218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.928518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.928594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.928614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.933830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.933908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.933927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.938846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.938948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.944143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.944204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.944225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.949331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.949432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.949454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.954301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.954380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.954401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.959316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.959392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.959414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.964541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.964623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.964644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.969511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.969592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.969612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.974372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.974450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.974470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.979371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.979479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.979526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.984332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.984413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.984434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.989239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.989319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.989339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.994180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.994260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.994281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.170 [2024-12-10 14:23:35.998988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.170 [2024-12-10 14:23:35.999072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.170 [2024-12-10 14:23:35.999093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.004276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.004352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.004373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.009540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.009622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.009643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.014423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.014501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.014522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.019331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.019405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.019427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.024447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.024526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.024547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.029364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.029444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.029464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.034196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.034257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.034276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.039376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.039463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 
[2024-12-10 14:23:36.039496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.044296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.044374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.044394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.049554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.049637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.049659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.055266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.055336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.055359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.060783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.060885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.060908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.066372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.066461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.066481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.072080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.072162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.431 [2024-12-10 14:23:36.072183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.431 [2024-12-10 14:23:36.077471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.431 [2024-12-10 14:23:36.077575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.077598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.083167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.083265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.088687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.088754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.088777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.094346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.094427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.094448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.100099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.100212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.100233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.104999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.105079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.105099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.109864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.109943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.109963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.115267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.115340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.115363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.120582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.120650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.120673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.126302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.126388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.126409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.131717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.131800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.131838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.137501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.137605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.137627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.142935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.143023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.143054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.148698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.148785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.148823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.154293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.154376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.154396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.159815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.159913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.159936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.164944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.165048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.165084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.170050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.170129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.170150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.175632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.175712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.175733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.180717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.180799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.180821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.185771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.185867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.185888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.190692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 
[2024-12-10 14:23:36.190822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.190844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.195886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.195980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.196001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.200764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.200844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.200865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.205651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.205731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.432 [2024-12-10 14:23:36.205751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.432 [2024-12-10 14:23:36.210681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.432 [2024-12-10 14:23:36.210763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.210783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.215636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.215726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.215746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.220661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.220728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.220750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.225919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.226011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.226033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.230753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.230832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.230852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.235666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.235746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.235766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.240654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.240729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.240753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.245660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.245741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.245761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.250571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.250651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.250672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.255553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.255634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.255656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.433 [2024-12-10 14:23:36.260816] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.433 [2024-12-10 14:23:36.260915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.433 [2024-12-10 14:23:36.260937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.266176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.266261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.266282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.271851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.271922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.271945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.277569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.277653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.277674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.283056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.283170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.283193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.288152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.288216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.288237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.293338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.293418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.293440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:18:11.694 [2024-12-10 14:23:36.298164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.298246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.298266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.303078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.303202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.303224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.308084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.308164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.308185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.313045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.313126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.313146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.317863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.317928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.317947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.323031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.323188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.323210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.328246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.328324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.328344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.333287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.333383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.333403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.338093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.338174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.338193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.342789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.342868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.694 [2024-12-10 14:23:36.342888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.694 [2024-12-10 14:23:36.347647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.694 [2024-12-10 14:23:36.347726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.347746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.352430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.352527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.352547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.357177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.357253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.357273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.695 6188.00 IOPS, 773.50 MiB/s [2024-12-10T14:23:36.532Z] [2024-12-10 14:23:36.362688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.362877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.362896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.367750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.367830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.367851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.372447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.372540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.372561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.377281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.377347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.377367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.382211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.382277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.382297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.387038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.387157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.387178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.391910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.392001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.392021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.396644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.396726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:11.695 [2024-12-10 14:23:36.396746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.401518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.401595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.401615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.406222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.406301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.406320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.410984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.411046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.411066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.415746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.415823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.415843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.420413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.420507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.420527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.425217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.425311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.425330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.429931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.430043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.430062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.434742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.434822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.434841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.439517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.439611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.439630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.444301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.444380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.444400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.449098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.449177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.449197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.453751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.453828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.453847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.458551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.458631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.458651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.463429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.463537] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.695 [2024-12-10 14:23:36.463557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.695 [2024-12-10 14:23:36.468314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.695 [2024-12-10 14:23:36.468430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.468466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.473222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.473301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.473322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.477949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.478041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.478060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.482679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.482759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.482778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.487529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.487613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.487633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.492579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.492659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.492679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.497381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.497457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.497476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.502185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.502262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.502282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.506807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.506886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.506906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.511703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.511783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.511803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.516525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.516604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.516625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.521269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.521346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.521366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.696 [2024-12-10 14:23:36.526233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.696 [2024-12-10 14:23:36.526301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.696 [2024-12-10 14:23:36.526322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.531385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 
00:18:11.957 [2024-12-10 14:23:36.531486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.531520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.536499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.536579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.536599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.541296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.541377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.541396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.545919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.546009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.546029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.550645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.550723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.550742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.555454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.555580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.555599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.560241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.560317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.560337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.565090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.565170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.565190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.569784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.569861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.569880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.574685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.574763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.574782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.579586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.579662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.579682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.584451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.584546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.584566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.589256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.589333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.589353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.594055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.594132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.594152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.598718] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.598797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.598817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.603577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.603654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.603673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.608286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.608366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.608385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.613119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.613197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.613217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.617729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.617816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.617836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.622567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.622646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.622666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.627710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.627789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.627809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:18:11.957 [2024-12-10 14:23:36.632931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.633035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.633067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.638484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.638572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.638595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.957 [2024-12-10 14:23:36.644173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.957 [2024-12-10 14:23:36.644246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.957 [2024-12-10 14:23:36.644268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.650230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.650301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.650325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.656173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.656250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.656273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.662402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.662499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.662548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.668439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.668542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.668565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.673874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.674034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.674057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.679422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.679538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.679562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.684948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.685043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.685065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.690040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.690117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.690138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.694997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.695076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.695097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.699920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.700007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.700027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.705595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.705670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.705693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.711214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.711290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.711313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.716617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.716707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.716730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.722009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.722084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.722107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.727280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.727354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.727377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.732735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.732822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.732858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.738281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.738375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.743647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.743722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.743745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.749021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.749117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.749139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.754439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.754541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.754581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.759645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.759725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.759746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.764720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.764802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.764838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.769616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.769697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.769717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.774619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.774697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 [2024-12-10 14:23:36.774717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.779639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.958 [2024-12-10 14:23:36.779716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.958 
[2024-12-10 14:23:36.779737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:11.958 [2024-12-10 14:23:36.784759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.959 [2024-12-10 14:23:36.784844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-12-10 14:23:36.784865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.959 [2024-12-10 14:23:36.789851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:11.959 [2024-12-10 14:23:36.789934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-12-10 14:23:36.789955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.794863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.794945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.794976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.799830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.799911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.799930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.805062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.805151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.805175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.810359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.810431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.810454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.815788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.815901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.815921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.820997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.821076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.821097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.825905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.826005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.826026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.830945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.831049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.831071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.836282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.836357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.836380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.841613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.219 [2024-12-10 14:23:36.841688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.219 [2024-12-10 14:23:36.841710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.219 [2024-12-10 14:23:36.846954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.847052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.847074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.851884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.851976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.851996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.856782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.856862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.856897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.861642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.861721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.861740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.866524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.866603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.866622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.871342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.871422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.871459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.876273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.876351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.876370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.881113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.881194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.881213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.885900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.885990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.886012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.890700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.890779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.890799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.895588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.895666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.895686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.900467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.900561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.900581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.905271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.905350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.905370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.910016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.910108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.910128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.914698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.914778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.914798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.919555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 
[2024-12-10 14:23:36.919632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.919652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.924378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.924456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.924476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.929227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.929305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.929324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.933989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.934070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.934089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.938673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.938751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.938770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.943558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.943634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.943654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.948382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.948463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.948501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.953166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.953243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.953263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.957871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.957950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.957981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.962564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.962640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-12-10 14:23:36.962660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.220 [2024-12-10 14:23:36.967397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.220 [2024-12-10 14:23:36.967476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.967511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:36.972252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:36.972317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.972337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:36.977126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:36.977191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.977210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:36.981887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:36.981978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.982011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:36.986695] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:36.986776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.986796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:36.991702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:36.991779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.991800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:36.996456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:36.996554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:36.996574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.001285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.001382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.001402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.006026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.006086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.006106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.010805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.010884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.010905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.015629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.015705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.015725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
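Each repeated group in this run is one injected digest failure: the TCP transport's data_crc32_calc_done() callback reports a CRC32C data-digest mismatch for a PDU on qpair 0x1416d30, and the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the status the digest_error test counts once the run finishes (see the bdev_get_iostat query further down). As a rough cross-check, the completions can be tallied from a saved copy of this console output; this is a sketch only, and the filename is illustrative, not part of the test:

grep -o 'COMMAND TRANSIENT TRANSPORT ERROR' console.log | wc -l   # occurrences of the transient-transport-error completion status
grep -o 'Data digest error on tqpair' console.log | wc -l          # digest mismatches reported by the transport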
00:18:12.221 [2024-12-10 14:23:37.020397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.020474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.020510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.025182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.025274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.025295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.029943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.030035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.030054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.034653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.034732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.034752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.039735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.039814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.039834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.044417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.044510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.044530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.221 [2024-12-10 14:23:37.049170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.221 [2024-12-10 14:23:37.049250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-12-10 14:23:37.049271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.054183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.054277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.054298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.059046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.059186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.059208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.064079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.064142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.064164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.068954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.069046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.069066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.074246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.074334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.074354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.079603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.079685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.079707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.085062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.085156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.085188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.090863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.090980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.091002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.096472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.096594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.096623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.102291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.102362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.102385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.108054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.108144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.108166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.113900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.114021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.114043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.119255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.119325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.119348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.124507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.124587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.124607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.129686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.129766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.129786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.134814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.134908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.134929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.139838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.139920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.139940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.144753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.144831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.144851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.149758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.149839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.149859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.154640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.154722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-12-10 14:23:37.154743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.482 [2024-12-10 14:23:37.159650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.482 [2024-12-10 14:23:37.159732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 
[2024-12-10 14:23:37.159752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.164666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.164744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.164764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.169621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.169703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.174458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.174557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.174577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.179723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.179824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.184639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.184717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.184737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.189556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.189637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.189657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.194684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.194762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.194783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.199854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.199935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.199956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.204620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.204699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.204719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.209658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.209735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.209755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.214613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.214693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.214713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.219506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.219600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.219620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.224586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.224665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.224685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.229497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.229577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.229597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.234355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.234437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.234457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.239252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.239318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.239339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.244214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.244295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.244315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.249086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.249166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.249186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.253860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.253919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.253939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.258777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.258857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.258877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.263716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.263795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.263816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.268843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.268920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.268939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.273734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.273814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.273833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.278705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.278787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.278807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.483 [2024-12-10 14:23:37.283583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.483 [2024-12-10 14:23:37.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-12-10 14:23:37.283684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.484 [2024-12-10 14:23:37.288354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.484 [2024-12-10 14:23:37.288432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.484 [2024-12-10 14:23:37.288451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.484 [2024-12-10 14:23:37.293096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.484 [2024-12-10 14:23:37.293159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.484 [2024-12-10 14:23:37.293179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.484 [2024-12-10 14:23:37.297919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.484 
[2024-12-10 14:23:37.298010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.484 [2024-12-10 14:23:37.298030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.484 [2024-12-10 14:23:37.302611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.484 [2024-12-10 14:23:37.302689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.484 [2024-12-10 14:23:37.302710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.484 [2024-12-10 14:23:37.307598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.484 [2024-12-10 14:23:37.307698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.484 [2024-12-10 14:23:37.307718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.484 [2024-12-10 14:23:37.312480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.484 [2024-12-10 14:23:37.312575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.484 [2024-12-10 14:23:37.312595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.317511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.317605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.317626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.322495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.322576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.322597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.327658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.327740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.327761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.332944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.333043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.333064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.337871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.337950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.337971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.343196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.343270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.343293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.348516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.348587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.348608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.353957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.743 [2024-12-10 14:23:37.354047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-12-10 14:23:37.354079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:12.743 [2024-12-10 14:23:37.359408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1416d30) with pdu=0x200016eff3c8 00:18:12.744 [2024-12-10 14:23:37.359504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-12-10 14:23:37.359527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.744 6185.50 IOPS, 773.19 MiB/s 00:18:12.744 Latency(us) 00:18:12.744 [2024-12-10T14:23:37.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.744 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:12.744 nvme0n1 : 2.00 6183.75 772.97 0.00 0.00 2581.62 2100.13 7298.33 00:18:12.744 [2024-12-10T14:23:37.581Z] =================================================================================================================== 00:18:12.744 [2024-12-10T14:23:37.581Z] Total : 6183.75 772.97 0.00 0.00 
2581.62 2100.13 7298.33 00:18:12.744 { 00:18:12.744 "results": [ 00:18:12.744 { 00:18:12.744 "job": "nvme0n1", 00:18:12.744 "core_mask": "0x2", 00:18:12.744 "workload": "randwrite", 00:18:12.744 "status": "finished", 00:18:12.744 "queue_depth": 16, 00:18:12.744 "io_size": 131072, 00:18:12.744 "runtime": 2.002992, 00:18:12.744 "iops": 6183.749111329451, 00:18:12.744 "mibps": 772.9686389161814, 00:18:12.744 "io_failed": 0, 00:18:12.744 "io_timeout": 0, 00:18:12.744 "avg_latency_us": 2581.619397853882, 00:18:12.744 "min_latency_us": 2100.130909090909, 00:18:12.744 "max_latency_us": 7298.327272727272 00:18:12.744 } 00:18:12.744 ], 00:18:12.744 "core_count": 1 00:18:12.744 } 00:18:12.744 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:12.744 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:12.744 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:12.744 | .driver_specific 00:18:12.744 | .nvme_error 00:18:12.744 | .status_code 00:18:12.744 | .command_transient_transport_error' 00:18:12.744 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81093 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81093 ']' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81093 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81093 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:13.003 killing process with pid 81093 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81093' 00:18:13.003 Received shutdown signal, test time was about 2.000000 seconds 00:18:13.003 00:18:13.003 Latency(us) 00:18:13.003 [2024-12-10T14:23:37.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.003 [2024-12-10T14:23:37.840Z] =================================================================================================================== 00:18:13.003 [2024-12-10T14:23:37.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81093 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81093 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80914 00:18:13.003 14:23:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80914 ']' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80914 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80914 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.003 killing process with pid 80914 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80914' 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80914 00:18:13.003 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80914 00:18:13.262 00:18:13.262 real 0m15.376s 00:18:13.262 user 0m29.513s 00:18:13.262 sys 0m4.356s 00:18:13.262 ************************************ 00:18:13.262 END TEST nvmf_digest_error 00:18:13.262 ************************************ 00:18:13.262 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.262 14:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.262 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.262 rmmod nvme_tcp 00:18:13.521 rmmod nvme_fabrics 00:18:13.521 rmmod nvme_keyring 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80914 ']' 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80914 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80914 ']' 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80914 00:18:13.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80914) - No such process 00:18:13.521 Process with pid 80914 is not found 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 80914 is not found' 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.521 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.522 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:13.781 ************************************ 00:18:13.781 END TEST nvmf_digest 00:18:13.781 ************************************ 00:18:13.781 00:18:13.781 real 0m31.137s 00:18:13.781 user 0m58.209s 00:18:13.781 sys 0m9.171s 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:13.781 
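The pass/fail decision for the digest-error run above did not come from the console messages themselves: get_transient_errcount queried the bdevperf RPC socket with bdev_get_iostat and used jq to extract .driver_specific.nvme_error.status_code.command_transient_transport_error, and the (( 399 > 0 )) check in the trace shows 399 such completions were recorded for nvme0n1. The reported throughput is also consistent with the job parameters: 6183.75 IOPS at an IO size of 131072 bytes is 6183.75 * 131072 / 1048576 ≈ 772.97 MiB/s, matching the summary. A minimal sketch of the same query, assuming a bdevperf instance were still listening on /var/tmp/bperf.sock (by this point in the log it has already been shut down):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
# prints the count of completions with the TRANSIENT TRANSPORT ERROR status (399 in this run)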
14:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.781 ************************************ 00:18:13.781 START TEST nvmf_host_multipath 00:18:13.781 ************************************ 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:13.781 * Looking for test storage... 00:18:13.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.781 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:14.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.042 --rc genhtml_branch_coverage=1 00:18:14.042 --rc genhtml_function_coverage=1 00:18:14.042 --rc genhtml_legend=1 00:18:14.042 --rc geninfo_all_blocks=1 00:18:14.042 --rc geninfo_unexecuted_blocks=1 00:18:14.042 00:18:14.042 ' 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:14.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.042 --rc genhtml_branch_coverage=1 00:18:14.042 --rc genhtml_function_coverage=1 00:18:14.042 --rc genhtml_legend=1 00:18:14.042 --rc geninfo_all_blocks=1 00:18:14.042 --rc geninfo_unexecuted_blocks=1 00:18:14.042 00:18:14.042 ' 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:14.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.042 --rc genhtml_branch_coverage=1 00:18:14.042 --rc genhtml_function_coverage=1 00:18:14.042 --rc genhtml_legend=1 00:18:14.042 --rc geninfo_all_blocks=1 00:18:14.042 --rc geninfo_unexecuted_blocks=1 00:18:14.042 00:18:14.042 ' 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:14.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.042 --rc genhtml_branch_coverage=1 00:18:14.042 --rc genhtml_function_coverage=1 00:18:14.042 --rc genhtml_legend=1 00:18:14.042 --rc geninfo_all_blocks=1 00:18:14.042 --rc geninfo_unexecuted_blocks=1 00:18:14.042 00:18:14.042 ' 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.042 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.043 Cannot find device "nvmf_init_br" 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.043 Cannot find device "nvmf_init_br2" 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.043 Cannot find device "nvmf_tgt_br" 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.043 Cannot find device "nvmf_tgt_br2" 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.043 Cannot find device "nvmf_init_br" 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:14.043 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.043 Cannot find device "nvmf_init_br2" 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.044 Cannot find device "nvmf_tgt_br" 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.044 Cannot find device "nvmf_tgt_br2" 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.044 Cannot find device "nvmf_br" 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.044 Cannot find device "nvmf_init_if" 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:14.044 Cannot find device "nvmf_init_if2" 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:14.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.044 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.304 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:14.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:14.304 00:18:14.304 --- 10.0.0.3 ping statistics --- 00:18:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.304 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:14.304 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:14.304 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:18:14.304 00:18:14.304 --- 10.0.0.4 ping statistics --- 00:18:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.304 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:14.304 00:18:14.304 --- 10.0.0.1 ping statistics --- 00:18:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.304 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:14.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:14.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:18:14.304 00:18:14.304 --- 10.0.0.2 ping statistics --- 00:18:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.304 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81401 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81401 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81401 ']' 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.304 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.304 [2024-12-10 14:23:39.124692] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:18:14.304 [2024-12-10 14:23:39.124784] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.564 [2024-12-10 14:23:39.272969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.564 [2024-12-10 14:23:39.301188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.564 [2024-12-10 14:23:39.301251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.564 [2024-12-10 14:23:39.301260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.564 [2024-12-10 14:23:39.301266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.564 [2024-12-10 14:23:39.301272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.564 [2024-12-10 14:23:39.302017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.564 [2024-12-10 14:23:39.302025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.564 [2024-12-10 14:23:39.330278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.564 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.564 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:14.564 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.564 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.564 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.823 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.823 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81401 00:18:14.823 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:15.082 [2024-12-10 14:23:39.681474] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.082 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:15.341 Malloc0 00:18:15.341 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:15.600 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.859 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:15.859 [2024-12-10 14:23:40.684698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.118 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:16.377 [2024-12-10 14:23:40.976881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81449 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81449 /var/tmp/bdevperf.sock 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81449 ']' 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.377 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.314 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.314 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:17.314 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:17.573 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:17.832 Nvme0n1 00:18:17.832 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:18.091 Nvme0n1 00:18:18.091 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:18.091 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.469 14:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:19.469 14:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:19.469 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:19.728 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:19.729 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81494 00:18:19.729 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:19.729 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.298 Attaching 4 probes... 00:18:26.298 @path[10.0.0.3, 4421]: 19408 00:18:26.298 @path[10.0.0.3, 4421]: 20036 00:18:26.298 @path[10.0.0.3, 4421]: 19896 00:18:26.298 @path[10.0.0.3, 4421]: 19711 00:18:26.298 @path[10.0.0.3, 4421]: 19551 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:26.298 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81494 00:18:26.299 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.299 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:26.299 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:26.299 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:26.557 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:26.557 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81608 00:18:26.557 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:26.557 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:33.121 Attaching 4 probes... 00:18:33.121 @path[10.0.0.3, 4420]: 19591 00:18:33.121 @path[10.0.0.3, 4420]: 19690 00:18:33.121 @path[10.0.0.3, 4420]: 19905 00:18:33.121 @path[10.0.0.3, 4420]: 19280 00:18:33.121 @path[10.0.0.3, 4420]: 19432 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81608 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:33.121 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:33.380 14:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:33.380 14:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81726 00:18:33.380 14:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:33.380 14:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.945 Attaching 4 probes... 00:18:39.945 @path[10.0.0.3, 4421]: 15680 00:18:39.945 @path[10.0.0.3, 4421]: 19462 00:18:39.945 @path[10.0.0.3, 4421]: 19256 00:18:39.945 @path[10.0.0.3, 4421]: 18496 00:18:39.945 @path[10.0.0.3, 4421]: 19056 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81726 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:39.945 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:40.204 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:40.462 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:40.462 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:40.462 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81838 00:18:40.462 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.027 Attaching 4 probes... 
00:18:47.027 00:18:47.027 00:18:47.027 00:18:47.027 00:18:47.027 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81838 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:47.027 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:47.286 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:47.286 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81956 00:18:47.286 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:47.286 14:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:53.864 14:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:53.864 14:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.864 Attaching 4 probes... 
00:18:53.864 @path[10.0.0.3, 4421]: 19037 00:18:53.864 @path[10.0.0.3, 4421]: 19360 00:18:53.864 @path[10.0.0.3, 4421]: 19264 00:18:53.864 @path[10.0.0.3, 4421]: 19255 00:18:53.864 @path[10.0.0.3, 4421]: 19452 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81956 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:53.864 14:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:54.801 14:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:54.801 14:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82074 00:18:54.801 14:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:54.801 14:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:01.366 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:01.366 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:01.366 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.367 Attaching 4 probes... 
00:19:01.367 @path[10.0.0.3, 4420]: 17151 00:19:01.367 @path[10.0.0.3, 4420]: 17698 00:19:01.367 @path[10.0.0.3, 4420]: 18714 00:19:01.367 @path[10.0.0.3, 4420]: 19290 00:19:01.367 @path[10.0.0.3, 4420]: 19293 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82074 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.367 14:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:01.367 [2024-12-10 14:24:26.087639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:01.367 14:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:01.625 14:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:08.191 14:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:08.191 14:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82254 00:19:08.191 14:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:08.191 14:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81401 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.765 Attaching 4 probes... 
00:19:14.765 @path[10.0.0.3, 4421]: 18875 00:19:14.765 @path[10.0.0.3, 4421]: 19184 00:19:14.765 @path[10.0.0.3, 4421]: 18382 00:19:14.765 @path[10.0.0.3, 4421]: 18643 00:19:14.765 @path[10.0.0.3, 4421]: 19114 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82254 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81449 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81449 ']' 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81449 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81449 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.765 killing process with pid 81449 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81449' 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81449 00:19:14.765 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81449 00:19:14.765 { 00:19:14.765 "results": [ 00:19:14.765 { 00:19:14.765 "job": "Nvme0n1", 00:19:14.765 "core_mask": "0x4", 00:19:14.765 "workload": "verify", 00:19:14.765 "status": "terminated", 00:19:14.765 "verify_range": { 00:19:14.765 "start": 0, 00:19:14.765 "length": 16384 00:19:14.765 }, 00:19:14.765 "queue_depth": 128, 00:19:14.765 "io_size": 4096, 00:19:14.765 "runtime": 55.738455, 00:19:14.765 "iops": 8120.5157193539, 00:19:14.765 "mibps": 31.72076452872617, 00:19:14.765 "io_failed": 0, 00:19:14.765 "io_timeout": 0, 00:19:14.765 "avg_latency_us": 15733.598143677034, 00:19:14.765 "min_latency_us": 722.3854545454545, 00:19:14.765 "max_latency_us": 7046430.72 00:19:14.765 } 00:19:14.765 ], 00:19:14.765 "core_count": 1 00:19:14.765 } 00:19:14.766 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81449 00:19:14.766 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.766 [2024-12-10 14:23:41.051139] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 
24.03.0 initialization... 00:19:14.766 [2024-12-10 14:23:41.051263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81449 ] 00:19:14.766 [2024-12-10 14:23:41.198561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.766 [2024-12-10 14:23:41.228873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.766 [2024-12-10 14:23:41.257662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:14.766 Running I/O for 90 seconds... 00:19:14.766 7572.00 IOPS, 29.58 MiB/s [2024-12-10T14:24:39.603Z] 8500.50 IOPS, 33.21 MiB/s [2024-12-10T14:24:39.603Z] 8981.67 IOPS, 35.08 MiB/s [2024-12-10T14:24:39.603Z] 9242.25 IOPS, 36.10 MiB/s [2024-12-10T14:24:39.603Z] 9382.60 IOPS, 36.65 MiB/s [2024-12-10T14:24:39.603Z] 9461.50 IOPS, 36.96 MiB/s [2024-12-10T14:24:39.603Z] 9505.29 IOPS, 37.13 MiB/s [2024-12-10T14:24:39.603Z] 9517.12 IOPS, 37.18 MiB/s [2024-12-10T14:24:39.603Z] [2024-12-10 14:23:51.265977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.266611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:14.766 [2024-12-10 14:23:51.266645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.266961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.266987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.267022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.267053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.267085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.267143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.766 [2024-12-10 14:23:51.267196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.267239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.267275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.766 [2024-12-10 14:23:51.267304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.766 [2024-12-10 14:23:51.267320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.267355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267376] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.267390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.267425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.267474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.267522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.267973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.267992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.268006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.268051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.268085] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 
[2024-12-10 14:23:51.268425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.767 [2024-12-10 14:23:51.268627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.268660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.767 [2024-12-10 14:23:51.268692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.767 [2024-12-10 14:23:51.268712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.268725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.268758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.268791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.268824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.268857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.268890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.268928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.268947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.268988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:19:14.768 [2024-12-10 14:23:51.269501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.269515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.269968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.269982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.270001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.270025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.270048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.270063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.271389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.768 [2024-12-10 14:23:51.271421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.271465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.271496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.768 [2024-12-10 14:23:51.271532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.768 [2024-12-10 14:23:51.271546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 
14:23:51.271579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.271971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.271992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.272005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.272038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.272057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.272079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104488 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.272092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.272112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.272125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.272146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.272160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:51.272183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:51.272198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.769 9514.00 IOPS, 37.16 MiB/s [2024-12-10T14:24:39.606Z] 9549.80 IOPS, 37.30 MiB/s [2024-12-10T14:24:39.606Z] 9584.91 IOPS, 37.44 MiB/s [2024-12-10T14:24:39.606Z] 9610.17 IOPS, 37.54 MiB/s [2024-12-10T14:24:39.606Z] 9606.92 IOPS, 37.53 MiB/s [2024-12-10T14:24:39.606Z] 9628.14 IOPS, 37.61 MiB/s [2024-12-10T14:24:39.606Z] [2024-12-10 14:23:57.888663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.888717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.888785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.888804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.888825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.888838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.888880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.888913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.888926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.888944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.888957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.888989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.769 [2024-12-10 14:23:57.889303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.769 [2024-12-10 14:23:57.889556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.769 [2024-12-10 14:23:57.889570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.889604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:19:14.770 [2024-12-10 14:23:57.889663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.889980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.889998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:14.770 [2024-12-10 14:23:57.890700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.770 [2024-12-10 14:23:57.890733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.770 [2024-12-10 14:23:57.890770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.770 [2024-12-10 14:23:57.890784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.890803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.890816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.890834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.890847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.890866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.890879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.890910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.890929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.890942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.890961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.890974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 
dnr:0 00:19:14.771 [2024-12-10 14:23:57.891801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.771 [2024-12-10 14:23:57.891908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.891944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.891985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.892006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.892020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.892040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.892066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.892104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.892120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.892140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.892153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.892172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.771 [2024-12-10 14:23:57.892186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.771 [2024-12-10 14:23:57.892205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.892520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.892740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.892753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.772 [2024-12-10 14:23:57.893497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.772 [2024-12-10 14:23:57.893597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.893934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.893988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.894005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.894032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.894047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.894074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.894089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.894123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.894139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.894166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.894180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:23:57.894207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:23:57.894221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.772 9534.53 IOPS, 37.24 MiB/s [2024-12-10T14:24:39.609Z] 9019.19 IOPS, 35.23 MiB/s [2024-12-10T14:24:39.609Z] 9069.76 IOPS, 35.43 MiB/s [2024-12-10T14:24:39.609Z] 9100.50 IOPS, 35.55 MiB/s [2024-12-10T14:24:39.609Z] 9127.63 IOPS, 35.65 MiB/s [2024-12-10T14:24:39.609Z] 9136.05 IOPS, 35.69 MiB/s [2024-12-10T14:24:39.609Z] 9157.38 IOPS, 35.77 MiB/s [2024-12-10T14:24:39.609Z] 9173.86 IOPS, 35.84 MiB/s [2024-12-10T14:24:39.609Z] [2024-12-10 14:24:05.068746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:24:05.068798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:24:05.068865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:24:05.068884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:24:05.068905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:24:05.068920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.772 [2024-12-10 14:24:05.068939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:24:05.068952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.772 
[2024-12-10 14:24:05.068983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.772 [2024-12-10 14:24:05.069000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.069382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.069965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.069989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.773 [2024-12-10 14:24:05.070205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:14.773 [2024-12-10 14:24:05.070281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.773 [2024-12-10 14:24:05.070639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.773 [2024-12-10 14:24:05.070652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.070695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.070731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.070766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.070804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.070838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.070871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.070904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.070940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.070954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:19:14.774 [2024-12-10 14:24:05.071433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.774 [2024-12-10 14:24:05.071448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.071955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.071978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.072008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.072033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.072049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.774 [2024-12-10 14:24:05.072073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.774 [2024-12-10 14:24:05.072104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.775 [2024-12-10 14:24:05.072593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.072722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.072954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44984 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.072969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.775 [2024-12-10 14:24:05.073313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.775 [2024-12-10 14:24:05.073618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.775 [2024-12-10 14:24:05.073633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:19:14.776 [2024-12-10 14:24:05.073741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:05.073923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:05.073937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.776 8790.65 IOPS, 34.34 MiB/s [2024-12-10T14:24:39.613Z] 8424.38 IOPS, 32.91 MiB/s [2024-12-10T14:24:39.613Z] 8087.40 IOPS, 31.59 MiB/s [2024-12-10T14:24:39.613Z] 7776.35 IOPS, 30.38 MiB/s [2024-12-10T14:24:39.613Z] 7488.33 IOPS, 29.25 MiB/s [2024-12-10T14:24:39.613Z] 7220.89 IOPS, 28.21 MiB/s [2024-12-10T14:24:39.613Z] 6971.90 IOPS, 27.23 MiB/s [2024-12-10T14:24:39.613Z] 7040.13 IOPS, 27.50 MiB/s [2024-12-10T14:24:39.613Z] 7124.48 IOPS, 27.83 MiB/s [2024-12-10T14:24:39.613Z] 7204.34 IOPS, 28.14 MiB/s [2024-12-10T14:24:39.613Z] 7277.91 IOPS, 28.43 MiB/s [2024-12-10T14:24:39.613Z] 7348.56 IOPS, 28.71 MiB/s [2024-12-10T14:24:39.613Z] 7402.14 IOPS, 28.91 MiB/s [2024-12-10T14:24:39.613Z] [2024-12-10 14:24:18.491895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.491955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.491981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:14.776 [2024-12-10 14:24:18.492033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:88 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.776 [2024-12-10 14:24:18.492725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.776 [2024-12-10 14:24:18.492881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.776 [2024-12-10 14:24:18.492894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.492906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.492919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.492931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.492944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8792 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.492956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 
14:24:18.493296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.777 [2024-12-10 14:24:18.493883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.493978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.493990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.494003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.494024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.494039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.494051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.494065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.777 [2024-12-10 14:24:18.494076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.777 [2024-12-10 14:24:18.494090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:14.778 [2024-12-10 14:24:18.494140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.778 [2024-12-10 14:24:18.494803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.494977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.494998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9104 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.495018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.495034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.495046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.495059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.778 [2024-12-10 14:24:18.495071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.778 [2024-12-10 14:24:18.495084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.779 [2024-12-10 14:24:18.495096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.779 [2024-12-10 14:24:18.495121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.779 [2024-12-10 14:24:18.495181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.779 [2024-12-10 14:24:18.495211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.779 [2024-12-10 14:24:18.495240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.779 [2024-12-10 14:24:18.495269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.779 [2024-12-10 14:24:18.495740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.495791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.779 [2024-12-10 14:24:18.495805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.779 [2024-12-10 14:24:18.495815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8656 len:8 PRP1 0x0 PRP2 0x0 00:19:14.779 [2024-12-10 14:24:18.495835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.779 [2024-12-10 14:24:18.496884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:14.779 [2024-12-10 14:24:18.496979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189b1e0 (9): Bad file descriptor 00:19:14.779 [2024-12-10 14:24:18.497366] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.779 [2024-12-10 14:24:18.497410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189b1e0 with addr=10.0.0.3, port=4421 00:19:14.779 [2024-12-10 14:24:18.497426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189b1e0 is same with the state(6) to be set 00:19:14.779 [2024-12-10 14:24:18.497583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189b1e0 (9): Bad file descriptor 00:19:14.779 [2024-12-10 14:24:18.497640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:14.779 [2024-12-10 14:24:18.497661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:14.779 [2024-12-10 14:24:18.497676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:14.779 [2024-12-10 14:24:18.497688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:19:14.779 [2024-12-10 14:24:18.497702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:14.779 7451.44 IOPS, 29.11 MiB/s [2024-12-10T14:24:39.616Z] 7479.24 IOPS, 29.22 MiB/s [2024-12-10T14:24:39.616Z] 7508.95 IOPS, 29.33 MiB/s [2024-12-10T14:24:39.616Z] 7542.46 IOPS, 29.46 MiB/s [2024-12-10T14:24:39.616Z] 7588.30 IOPS, 29.64 MiB/s [2024-12-10T14:24:39.616Z] 7638.15 IOPS, 29.84 MiB/s [2024-12-10T14:24:39.616Z] 7686.19 IOPS, 30.02 MiB/s [2024-12-10T14:24:39.616Z] 7722.33 IOPS, 30.17 MiB/s [2024-12-10T14:24:39.616Z] 7760.64 IOPS, 30.31 MiB/s [2024-12-10T14:24:39.616Z] 7802.58 IOPS, 30.48 MiB/s [2024-12-10T14:24:39.616Z] [2024-12-10 14:24:28.562224] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:14.779 7840.20 IOPS, 30.63 MiB/s [2024-12-10T14:24:39.616Z] 7877.64 IOPS, 30.77 MiB/s [2024-12-10T14:24:39.616Z] 7914.02 IOPS, 30.91 MiB/s [2024-12-10T14:24:39.616Z] 7945.65 IOPS, 31.04 MiB/s [2024-12-10T14:24:39.616Z] 7971.22 IOPS, 31.14 MiB/s [2024-12-10T14:24:39.616Z] 8003.47 IOPS, 31.26 MiB/s [2024-12-10T14:24:39.616Z] 8033.25 IOPS, 31.38 MiB/s [2024-12-10T14:24:39.616Z] 8055.57 IOPS, 31.47 MiB/s [2024-12-10T14:24:39.616Z] 8078.98 IOPS, 31.56 MiB/s [2024-12-10T14:24:39.616Z] 8105.62 IOPS, 31.66 MiB/s [2024-12-10T14:24:39.616Z] Received shutdown signal, test time was about 55.739314 seconds 00:19:14.779 00:19:14.779 Latency(us) 00:19:14.779 [2024-12-10T14:24:39.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.779 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.779 Verification LBA range: start 0x0 length 0x4000 00:19:14.779 Nvme0n1 : 55.74 8120.52 31.72 0.00 0.00 15733.60 722.39 7046430.72 00:19:14.779 [2024-12-10T14:24:39.616Z] =================================================================================================================== 00:19:14.779 [2024-12-10T14:24:39.616Z] Total : 8120.52 31.72 0.00 0.00 15733.60 722.39 7046430.72 00:19:14.779 14:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.779 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.779 rmmod nvme_tcp 00:19:14.779 rmmod nvme_fabrics 00:19:14.779 rmmod nvme_keyring 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81401 ']' 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81401 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81401 ']' 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81401 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81401 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.780 killing process with pid 81401 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81401' 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81401 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81401 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:14.780 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:15.039 00:19:15.039 real 1m1.276s 00:19:15.039 user 2m50.532s 00:19:15.039 sys 0m18.046s 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.039 ************************************ 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:15.039 END TEST nvmf_host_multipath 00:19:15.039 ************************************ 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.039 ************************************ 00:19:15.039 START TEST nvmf_timeout 00:19:15.039 ************************************ 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:15.039 * Looking for test storage... 
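Before following the nvmf_timeout run below, a quick cross-check on the nvmf_host_multipath summary that closed above: the MiB/s column follows directly from the reported IOPS and the 4096-byte I/O size bdevperf uses in these runs. A minimal sketch of that arithmetic (not part of the test output):

    # Rough check (hypothetical, not from the harness): 8120.52 IOPS of 4096-byte I/Os in MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 8120.52 * 4096 / (1024 * 1024) }'
    # -> 31.72 MiB/s, matching the Nvme0n1 row of the Latency(us) table above.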
00:19:15.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:19:15.039 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.299 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:15.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.300 --rc genhtml_branch_coverage=1 00:19:15.300 --rc genhtml_function_coverage=1 00:19:15.300 --rc genhtml_legend=1 00:19:15.300 --rc geninfo_all_blocks=1 00:19:15.300 --rc geninfo_unexecuted_blocks=1 00:19:15.300 00:19:15.300 ' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:15.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.300 --rc genhtml_branch_coverage=1 00:19:15.300 --rc genhtml_function_coverage=1 00:19:15.300 --rc genhtml_legend=1 00:19:15.300 --rc geninfo_all_blocks=1 00:19:15.300 --rc geninfo_unexecuted_blocks=1 00:19:15.300 00:19:15.300 ' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:15.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.300 --rc genhtml_branch_coverage=1 00:19:15.300 --rc genhtml_function_coverage=1 00:19:15.300 --rc genhtml_legend=1 00:19:15.300 --rc geninfo_all_blocks=1 00:19:15.300 --rc geninfo_unexecuted_blocks=1 00:19:15.300 00:19:15.300 ' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:15.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.300 --rc genhtml_branch_coverage=1 00:19:15.300 --rc genhtml_function_coverage=1 00:19:15.300 --rc genhtml_legend=1 00:19:15.300 --rc geninfo_all_blocks=1 00:19:15.300 --rc geninfo_unexecuted_blocks=1 00:19:15.300 00:19:15.300 ' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.300 
14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.300 14:24:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.300 14:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:15.300 Cannot find device "nvmf_init_br" 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:15.300 Cannot find device "nvmf_init_br2" 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:15.300 Cannot find device "nvmf_tgt_br" 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.300 Cannot find device "nvmf_tgt_br2" 00:19:15.300 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:15.301 Cannot find device "nvmf_init_br" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:15.301 Cannot find device "nvmf_init_br2" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:15.301 Cannot find device "nvmf_tgt_br" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:15.301 Cannot find device "nvmf_tgt_br2" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:15.301 Cannot find device "nvmf_br" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:15.301 Cannot find device "nvmf_init_if" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:15.301 Cannot find device "nvmf_init_if2" 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.301 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
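The nvmf_veth_init sequence traced above (nvmf/common.sh@177-219) builds the virtual topology that the ping checks below exercise: veth pairs for the initiator side, veth pairs moved into the nvmf_tgt_ns_spdk namespace for the target side, a bridge joining the peer ends, and iptables rules admitting TCP port 4420. A condensed sketch of one initiator/target pair, using the same interface names and addresses as the log (the second pair and the 10.0.0.2/10.0.0.4 addresses are omitted here):

    # Condensed sketch of the setup traced above; not a verbatim replay of nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + its bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + its bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT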
00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:15.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:19:15.560 00:19:15.560 --- 10.0.0.3 ping statistics --- 00:19:15.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.560 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:15.560 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:15.560 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:19:15.560 00:19:15.560 --- 10.0.0.4 ping statistics --- 00:19:15.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.560 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:15.560 00:19:15.560 --- 10.0.0.1 ping statistics --- 00:19:15.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.560 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:15.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:19:15.560 00:19:15.560 --- 10.0.0.2 ping statistics --- 00:19:15.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.560 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:15.560 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82616 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82616 00:19:15.821 14:24:40 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82616 ']' 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.821 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:15.821 [2024-12-10 14:24:40.463673] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:19:15.821 [2024-12-10 14:24:40.463761] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.821 [2024-12-10 14:24:40.611313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:15.821 [2024-12-10 14:24:40.639595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.821 [2024-12-10 14:24:40.639658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.821 [2024-12-10 14:24:40.639668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.821 [2024-12-10 14:24:40.639675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.821 [2024-12-10 14:24:40.639681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.821 [2024-12-10 14:24:40.640432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.821 [2024-12-10 14:24:40.640441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.079 [2024-12-10 14:24:40.669351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.079 14:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:16.338 [2024-12-10 14:24:41.044455] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.338 14:24:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:16.597 Malloc0 00:19:16.597 14:24:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:16.855 14:24:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.114 14:24:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:17.372 [2024-12-10 14:24:42.051764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82652 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82652 /var/tmp/bdevperf.sock 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82652 ']' 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
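Collected from the host/timeout.sh@25-29 steps traced above, the target-side configuration amounts to five RPCs against the nvmf_tgt started earlier; a condensed restatement with the same parameters as in the trace (paths relative to the spdk repo):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB Malloc bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420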
00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.372 14:24:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.372 [2024-12-10 14:24:42.116063] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:19:17.372 [2024-12-10 14:24:42.116151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82652 ] 00:19:17.631 [2024-12-10 14:24:42.262423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.631 [2024-12-10 14:24:42.300906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.631 [2024-12-10 14:24:42.334730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.566 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.566 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:18.566 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:18.566 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:18.825 NVMe0n1 00:19:18.825 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:18.825 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82676 00:19:18.825 14:24:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:19.083 Running I/O for 10 seconds... 
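For reference, the initiator side that produced the "Running I/O for 10 seconds..." line above reduces to four commands against the bdevperf RPC socket, with arguments exactly as traced (paths again relative to the spdk repo; bdevperf is started idle with -z and then driven over /var/tmp/bdevperf.sock):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is what this timeout test appears to exercise: the listener is removed in the next step, and the host-side controller is expected to retry and then give up within those bounds.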
00:19:20.023 14:24:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.023 7588.00 IOPS, 29.64 MiB/s [2024-12-10T14:24:44.860Z] [2024-12-10 14:24:44.797815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.023 [2024-12-10 14:24:44.798401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.023 [2024-12-10 14:24:44.798515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.023 [2024-12-10 14:24:44.798588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.023 [2024-12-10 14:24:44.798653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.798734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.798796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.798865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.798928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.799034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.799106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.799208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.799307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.799402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.799495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.799644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.799733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.799805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.799893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69280 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.799992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.800111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.800213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.800296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.800396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.800502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.800588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.800655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.800749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.800827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.800901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.800990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.801134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.801205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.801298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.801384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.801461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.801551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.801643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.801726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.024 [2024-12-10 14:24:44.801836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.801910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.802022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.802107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.802194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.802281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.802420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.802499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.802581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.802641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.802736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.802832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.802908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.803023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.803129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.803267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.803357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.803456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.803566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.803643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.803721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.803795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.803874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.803943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.804062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.804152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.804257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.804350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.804455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.804534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.804619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.804704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.804789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.804875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.804975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.805094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.805195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.805284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.805393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.805468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.805553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.805643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.805746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.805826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.024 [2024-12-10 14:24:44.805921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.024 [2024-12-10 14:24:44.806029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.806113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.806177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.806249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.806343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.806433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.806531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.806623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.806709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.806816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.806904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.807009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.807130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.807269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.807370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.807456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.807581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.807671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.807759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.807857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.807942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.808071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.808170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.808276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.808370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.808446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.808567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.808666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.808753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.808857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.808948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.809128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.809204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.809324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.809431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.809513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 
14:24:44.809619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.809721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.809807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.809927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.810032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.810161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.810260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.810365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.810451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.810545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.810638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.810717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.810795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.810900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.811001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.811108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.811219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.811324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.811405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.811534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.811624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.025 [2024-12-10 14:24:44.811711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.811791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.811876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.811955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.812110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.812189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.812303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.812418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.812532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.812619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.812718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.812796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.813012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.813130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.813233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.813256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.813268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.813277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.813288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.813297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.813323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.813332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.025 [2024-12-10 14:24:44.813345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.025 [2024-12-10 14:24:44.813359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.026 [2024-12-10 14:24:44.813450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.026 [2024-12-10 14:24:44.813482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.026 [2024-12-10 14:24:44.813679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.026 [2024-12-10 14:24:44.813792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.813948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.813956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.026 [2024-12-10 14:24:44.814382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.026 [2024-12-10 14:24:44.814392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.027 [2024-12-10 14:24:44.814401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.027 [2024-12-10 14:24:44.814425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.027 [2024-12-10 14:24:44.814449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.027 [2024-12-10 14:24:44.814468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.027 [2024-12-10 14:24:44.814489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb3690 is same with the state(6) to be set 00:19:20.027 [2024-12-10 14:24:44.814511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.027 [2024-12-10 14:24:44.814523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.027 [2024-12-10 14:24:44.814536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69200 len:8 PRP1 0x0 PRP2 0x0 00:19:20.027 [2024-12-10 14:24:44.814550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.027 [2024-12-10 14:24:44.814737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.027 [2024-12-10 14:24:44.814757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.027 [2024-12-10 14:24:44.814774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.027 [2024-12-10 14:24:44.814796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.027 [2024-12-10 14:24:44.814810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53e50 is same with the state(6) to be set 00:19:20.027 [2024-12-10 14:24:44.818773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:20.027 [2024-12-10 14:24:44.818913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53e50 (9): Bad file descriptor 00:19:20.027 [2024-12-10 14:24:44.819187] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.027 14:24:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:20.027 [2024-12-10 14:24:44.819330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc53e50 with addr=10.0.0.3, port=4420 00:19:20.027 [2024-12-10 14:24:44.819450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53e50 is same with the state(6) to be set 00:19:20.027 [2024-12-10 14:24:44.819604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53e50 (9): Bad file descriptor 00:19:20.027 [2024-12-10 14:24:44.819702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:20.027 [2024-12-10 14:24:44.819805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:20.027 [2024-12-10 14:24:44.819892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:20.027 [2024-12-10 14:24:44.820033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
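The run above is the failure path this timeout test is meant to hit: every queued READ/WRITE on qid:1 is completed manually with ABORTED - SQ DELETION (00/08), and each reconnect attempt to 10.0.0.3:4420 dies in uring_sock_create with errno = 111 (ECONNREFUSED) until bdev_nvme finally reports that resetting the controller failed. When triaging a log like this by hand, a rough bash sketch along the following lines can summarize the abort flood and re-check the controller state with the same RPCs the script itself issues just below; the file name build.log is only a stand-in for wherever this console output was saved:

  # Tally the aborted submissions and split them by opcode (patterns taken from the lines above).
  grep -c 'ABORTED - SQ DELETION' build.log
  grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c

  # Ask the still-running bdevperf what it thinks is left; these are the exact RPCs
  # host/timeout.sh uses for its get_controller/get_bdev checks.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'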
00:19:20.027 [2024-12-10 14:24:44.820160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:21.939 4298.50 IOPS, 16.79 MiB/s [2024-12-10T14:24:47.035Z] 2865.67 IOPS, 11.19 MiB/s [2024-12-10T14:24:47.035Z] [2024-12-10 14:24:46.820323] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.198 [2024-12-10 14:24:46.820826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc53e50 with addr=10.0.0.3, port=4420 00:19:22.198 [2024-12-10 14:24:46.820875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53e50 is same with the state(6) to be set 00:19:22.198 [2024-12-10 14:24:46.820908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53e50 (9): Bad file descriptor 00:19:22.198 [2024-12-10 14:24:46.820927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:22.198 [2024-12-10 14:24:46.820936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:22.198 [2024-12-10 14:24:46.820946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:22.198 [2024-12-10 14:24:46.820956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:22.198 [2024-12-10 14:24:46.820966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:22.198 14:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:22.198 14:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:22.198 14:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:22.456 14:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:22.456 14:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:22.456 14:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:22.456 14:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:22.715 14:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:22.715 14:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:23.908 2149.25 IOPS, 8.40 MiB/s [2024-12-10T14:24:49.003Z] 1719.40 IOPS, 6.72 MiB/s [2024-12-10T14:24:49.003Z] [2024-12-10 14:24:48.821096] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.166 [2024-12-10 14:24:48.821163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc53e50 with addr=10.0.0.3, port=4420 00:19:24.166 [2024-12-10 14:24:48.821178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53e50 is same with the state(6) to be set 00:19:24.166 [2024-12-10 14:24:48.821200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53e50 (9): Bad file descriptor 00:19:24.166 [2024-12-10 14:24:48.821217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:24.166 [2024-12-10 
14:24:48.821226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:24.166 [2024-12-10 14:24:48.821235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:24.166 [2024-12-10 14:24:48.821245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:24.166 [2024-12-10 14:24:48.821255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:26.034 1432.83 IOPS, 5.60 MiB/s [2024-12-10T14:24:50.871Z] 1228.14 IOPS, 4.80 MiB/s [2024-12-10T14:24:50.871Z] [2024-12-10 14:24:50.821280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:26.034 [2024-12-10 14:24:50.821481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:26.034 [2024-12-10 14:24:50.821501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:26.034 [2024-12-10 14:24:50.821512] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:26.034 [2024-12-10 14:24:50.821528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:27.227 1074.62 IOPS, 4.20 MiB/s 00:19:27.227 Latency(us) 00:19:27.227 [2024-12-10T14:24:52.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.227 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.227 Verification LBA range: start 0x0 length 0x4000 00:19:27.227 NVMe0n1 : 8.15 1054.56 4.12 15.70 0.00 119614.96 3678.95 7046430.72 00:19:27.227 [2024-12-10T14:24:52.064Z] =================================================================================================================== 00:19:27.227 [2024-12-10T14:24:52.064Z] Total : 1054.56 4.12 15.70 0.00 119614.96 3678.95 7046430.72 00:19:27.227 { 00:19:27.227 "results": [ 00:19:27.227 { 00:19:27.227 "job": "NVMe0n1", 00:19:27.227 "core_mask": "0x4", 00:19:27.227 "workload": "verify", 00:19:27.227 "status": "finished", 00:19:27.227 "verify_range": { 00:19:27.227 "start": 0, 00:19:27.227 "length": 16384 00:19:27.227 }, 00:19:27.227 "queue_depth": 128, 00:19:27.227 "io_size": 4096, 00:19:27.227 "runtime": 8.152225, 00:19:27.227 "iops": 1054.5587247653248, 00:19:27.227 "mibps": 4.11937001861455, 00:19:27.227 "io_failed": 128, 00:19:27.227 "io_timeout": 0, 00:19:27.227 "avg_latency_us": 119614.95687501953, 00:19:27.227 "min_latency_us": 3678.9527272727273, 00:19:27.227 "max_latency_us": 7046430.72 00:19:27.227 } 00:19:27.227 ], 00:19:27.227 "core_count": 1 00:19:27.227 } 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_get_bdevs 00:19:27.794 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82676 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82652 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82652 ']' 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82652 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82652 00:19:28.361 killing process with pid 82652 00:19:28.361 Received shutdown signal, test time was about 9.293520 seconds 00:19:28.361 00:19:28.361 Latency(us) 00:19:28.361 [2024-12-10T14:24:53.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.361 [2024-12-10T14:24:53.198Z] =================================================================================================================== 00:19:28.361 [2024-12-10T14:24:53.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82652' 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82652 00:19:28.361 14:24:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82652 00:19:28.361 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:28.619 [2024-12-10 14:24:53.299928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:28.619 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:28.619 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82798 00:19:28.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82798 /var/tmp/bdevperf.sock 00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82798 ']' 00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
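At this point the first bdevperf (pid 82652) has been killed and the test re-arms itself for the next scenario: the TCP listener for nqn.2016-06.io.spdk:cnode1 is added back on 10.0.0.3:4420, a fresh bdevperf is started in wait-for-RPC mode, and the script blocks until its /var/tmp/bdevperf.sock socket answers. A condensed sketch of that sequence, with the waitforlisten helper from autotest_common.sh replaced here by a hand-rolled polling loop (spdk_get_version is just a cheap RPC to probe the socket with):

  spdk=/home/vagrant/spdk_repo/spdk

  # Re-add the TCP listener via the target's default RPC socket.
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Start bdevperf idle (-z: wait for a perform_tests RPC) with the same workload knobs as above.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!

  # Stand-in for waitforlisten: poll until the bdevperf RPC server is reachable.
  until "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done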
00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.620 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:28.620 [2024-12-10 14:24:53.378388] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:19:28.620 [2024-12-10 14:24:53.378729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82798 ] 00:19:28.878 [2024-12-10 14:24:53.525625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.878 [2024-12-10 14:24:53.554665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.878 [2024-12-10 14:24:53.582040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.878 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.878 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:28.878 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:29.136 14:24:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:29.395 NVMe0n1 00:19:29.395 14:24:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:29.395 14:24:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82809 00:19:29.395 14:24:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:29.653 Running I/O for 10 seconds... 
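The controller is then attached with the three knobs this test exercises: --reconnect-delay-sec 1 (retry the connection every second), --fast-io-fail-timeout-sec 2 (start failing I/O once the controller has been unreachable that long) and --ctrlr-loss-timeout-sec 5 (give up on the controller entirely after five seconds), with bdev_nvme_set_options -r -1 passed first exactly as the script does. A sketch of the same attach plus the perform_tests kick-off, assuming the bdevperf RPC server from the previous step is listening:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # Retry-count setting, kept verbatim from the trace above.
  $rpc bdev_nvme_set_options -r -1

  # Attach the target with the reconnect/timeout policy under test.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Start the 10-second verify run configured on the bdevperf command line; it runs in the
  # background so the script can yank the listener while I/O is still in flight.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!
  sleep 1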
00:19:30.588 14:24:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:30.849 9680.00 IOPS, 37.81 MiB/s [2024-12-10T14:24:55.686Z] [2024-12-10 14:24:55.485149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.849 [2024-12-10 14:24:55.485505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.849 [2024-12-10 14:24:55.485870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.849 [2024-12-10 14:24:55.485880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.485888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.485897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.485906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.485916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.485924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.485934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.485942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.485953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.485961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.485970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.485978] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.485989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.485997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:30.850 [2024-12-10 14:24:55.486453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.850 [2024-12-10 14:24:55.486696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486707] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.850 [2024-12-10 14:24:55.486793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.850 [2024-12-10 14:24:55.486804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.486978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.486991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90752 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.851 [2024-12-10 14:24:55.487447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.851 [2024-12-10 14:24:55.487611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.851 [2024-12-10 14:24:55.487840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.851 [2024-12-10 14:24:55.487853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.852 [2024-12-10 14:24:55.487867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.487878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.852 [2024-12-10 14:24:55.487887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.487897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.852 [2024-12-10 14:24:55.487906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.487916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.852 [2024-12-10 14:24:55.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.487940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.487950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.487966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.487981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.487994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.488015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.488026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.488046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.488061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.488077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.488087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.488101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.852 [2024-12-10 14:24:55.488110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.488153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.852 [2024-12-10 14:24:55.488168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.852 [2024-12-10 14:24:55.488180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90936 len:8 PRP1 0x0 PRP2 0x0 00:19:30.852 [2024-12-10 14:24:55.488194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.852 [2024-12-10 14:24:55.488482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:30.852 [2024-12-10 14:24:55.488573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:30.852 [2024-12-10 14:24:55.488690] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.852 [2024-12-10 14:24:55.488726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202be50 with addr=10.0.0.3, port=4420 00:19:30.852 [2024-12-10 14:24:55.488737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202be50 is same with the state(6) to be set 00:19:30.852 [2024-12-10 
14:24:55.488754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:30.852 [2024-12-10 14:24:55.488788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:30.852 [2024-12-10 14:24:55.488802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:30.852 [2024-12-10 14:24:55.488814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:30.852 [2024-12-10 14:24:55.488824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:30.852 [2024-12-10 14:24:55.488834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:30.852 14:24:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:31.787 5657.00 IOPS, 22.10 MiB/s [2024-12-10T14:24:56.624Z] [2024-12-10 14:24:56.488935] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.787 [2024-12-10 14:24:56.489026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202be50 with addr=10.0.0.3, port=4420 00:19:31.787 [2024-12-10 14:24:56.489043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202be50 is same with the state(6) to be set 00:19:31.787 [2024-12-10 14:24:56.489066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:31.787 [2024-12-10 14:24:56.489084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:31.787 [2024-12-10 14:24:56.489093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:31.787 [2024-12-10 14:24:56.489102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:31.787 [2024-12-10 14:24:56.489129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:31.787 [2024-12-10 14:24:56.489139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:31.787 14:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:32.045 [2024-12-10 14:24:56.766406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:32.045 14:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82809 00:19:32.870 3771.33 IOPS, 14.73 MiB/s [2024-12-10T14:24:57.707Z] [2024-12-10 14:24:57.509522] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
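The log above shows the nvmf_timeout host test dropping the 10.0.0.3:4420 TCP listener while I/O is in flight (host/timeout.sh@87), waiting, then re-adding it (host/timeout.sh@91) until the host's reconnect succeeds ("Resetting controller successful"). A minimal sketch of that remove/re-add cycle, using only the rpc.py invocations visible in the log output; the one-second pause mirrors the "sleep 1" steps shown, everything else is an assumption about how the steps fit together, not the test script itself:

#!/usr/bin/env bash
# Sketch of the listener toggle exercised above (host/timeout.sh steps @87-@92).
# The rpc.py path, NQN, address and port are copied from the log; the pause
# length mirrors the "sleep 1" steps and is otherwise an assumption.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Remove the listener: in-flight I/O is aborted (the SQ DELETION notices above)
# and the host begins resetting/reconnecting the controller.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

sleep 1

# Re-add the listener so a later reconnect attempt succeeds, which the log
# reports as "Resetting controller successful".
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420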
00:19:34.743 2828.50 IOPS, 11.05 MiB/s [2024-12-10T14:25:00.522Z] 3971.40 IOPS, 15.51 MiB/s [2024-12-10T14:25:01.456Z] 5021.33 IOPS, 19.61 MiB/s [2024-12-10T14:25:02.391Z] 5774.86 IOPS, 22.56 MiB/s [2024-12-10T14:25:03.766Z] 6334.00 IOPS, 24.74 MiB/s [2024-12-10T14:25:04.332Z] 6768.00 IOPS, 26.44 MiB/s [2024-12-10T14:25:04.590Z] 7110.40 IOPS, 27.77 MiB/s 00:19:39.753 Latency(us) 00:19:39.753 [2024-12-10T14:25:04.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.753 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:39.753 Verification LBA range: start 0x0 length 0x4000 00:19:39.753 NVMe0n1 : 10.01 7113.65 27.79 0.00 0.00 17957.82 2770.39 3019898.88 00:19:39.753 [2024-12-10T14:25:04.590Z] =================================================================================================================== 00:19:39.753 [2024-12-10T14:25:04.590Z] Total : 7113.65 27.79 0.00 0.00 17957.82 2770.39 3019898.88 00:19:39.753 { 00:19:39.754 "results": [ 00:19:39.754 { 00:19:39.754 "job": "NVMe0n1", 00:19:39.754 "core_mask": "0x4", 00:19:39.754 "workload": "verify", 00:19:39.754 "status": "finished", 00:19:39.754 "verify_range": { 00:19:39.754 "start": 0, 00:19:39.754 "length": 16384 00:19:39.754 }, 00:19:39.754 "queue_depth": 128, 00:19:39.754 "io_size": 4096, 00:19:39.754 "runtime": 10.010046, 00:19:39.754 "iops": 7113.653623569762, 00:19:39.754 "mibps": 27.787709467069384, 00:19:39.754 "io_failed": 0, 00:19:39.754 "io_timeout": 0, 00:19:39.754 "avg_latency_us": 17957.817332066876, 00:19:39.754 "min_latency_us": 2770.3854545454546, 00:19:39.754 "max_latency_us": 3019898.88 00:19:39.754 } 00:19:39.754 ], 00:19:39.754 "core_count": 1 00:19:39.754 } 00:19:39.754 14:25:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82919 00:19:39.754 14:25:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:39.754 14:25:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.754 Running I/O for 10 seconds... 
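As a cross-check on the result block above: the reported MiB/s follows directly from the measured IOPS and the 4096-byte I/O size. A one-line recomputation using values copied from the JSON (no new data, purely illustrative):

# mibps = iops * io_size / 2^20, with iops and io_size taken from the JSON above
awk 'BEGIN { printf "%.2f MiB/s\n", 7113.653623569762 * 4096 / 1048576 }'   # ~27.79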
00:19:40.688 14:25:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:40.948 7829.00 IOPS, 30.58 MiB/s [2024-12-10T14:25:05.785Z] [2024-12-10 14:25:05.627946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.948 [2024-12-10 14:25:05.628196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.948 [2024-12-10 14:25:05.628429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.948 [2024-12-10 14:25:05.628439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:40.949 [2024-12-10 14:25:05.628686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.628987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.628995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629065] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.949 [2024-12-10 14:25:05.629248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.949 [2024-12-10 14:25:05.629258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 
[2024-12-10 14:25:05.629439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.629970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.629997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.630006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.950 [2024-12-10 14:25:05.630016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72680 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:40.950 [2024-12-10 14:25:05.630026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 
14:25:05.630243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.951 [2024-12-10 14:25:05.630807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.951 [2024-12-10 14:25:05.630827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.630842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2093990 is same with the state(6) to be set 00:19:40.951 [2024-12-10 14:25:05.630856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.951 [2024-12-10 14:25:05.630868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.951 [2024-12-10 14:25:05.630879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:19:40.951 [2024-12-10 14:25:05.630889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.951 [2024-12-10 14:25:05.631197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:40.951 [2024-12-10 14:25:05.631354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:40.951 [2024-12-10 14:25:05.631474] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.951 [2024-12-10 14:25:05.631502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202be50 with addr=10.0.0.3, port=4420 00:19:40.951 [2024-12-10 
14:25:05.631534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202be50 is same with the state(6) to be set 00:19:40.951 [2024-12-10 14:25:05.631554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:40.951 [2024-12-10 14:25:05.631569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:40.952 [2024-12-10 14:25:05.631578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:40.952 [2024-12-10 14:25:05.631587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:40.952 [2024-12-10 14:25:05.631597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:40.952 [2024-12-10 14:25:05.631607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:40.952 14:25:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:41.887 4490.00 IOPS, 17.54 MiB/s [2024-12-10T14:25:06.724Z] [2024-12-10 14:25:06.631713] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.887 [2024-12-10 14:25:06.631937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202be50 with addr=10.0.0.3, port=4420 00:19:41.887 [2024-12-10 14:25:06.631997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202be50 is same with the state(6) to be set 00:19:41.887 [2024-12-10 14:25:06.632030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:41.887 [2024-12-10 14:25:06.632050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:41.887 [2024-12-10 14:25:06.632060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:41.887 [2024-12-10 14:25:06.632071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:41.887 [2024-12-10 14:25:06.632083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:41.887 [2024-12-10 14:25:06.632098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:42.823 2993.33 IOPS, 11.69 MiB/s [2024-12-10T14:25:07.660Z] [2024-12-10 14:25:07.632215] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.823 [2024-12-10 14:25:07.632281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202be50 with addr=10.0.0.3, port=4420 00:19:42.823 [2024-12-10 14:25:07.632296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202be50 is same with the state(6) to be set 00:19:42.823 [2024-12-10 14:25:07.632318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:42.823 [2024-12-10 14:25:07.632336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:42.823 [2024-12-10 14:25:07.632345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:42.823 [2024-12-10 14:25:07.632355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:42.823 [2024-12-10 14:25:07.632365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:42.823 [2024-12-10 14:25:07.632375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:44.015 2245.00 IOPS, 8.77 MiB/s [2024-12-10T14:25:08.852Z] [2024-12-10 14:25:08.635651] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.015 [2024-12-10 14:25:08.635700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202be50 with addr=10.0.0.3, port=4420 00:19:44.015 [2024-12-10 14:25:08.635714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202be50 is same with the state(6) to be set 00:19:44.015 [2024-12-10 14:25:08.635933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202be50 (9): Bad file descriptor 00:19:44.015 [2024-12-10 14:25:08.636237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:44.015 [2024-12-10 14:25:08.636252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:44.015 [2024-12-10 14:25:08.636263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:44.015 [2024-12-10 14:25:08.636273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:44.015 [2024-12-10 14:25:08.636283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:44.015 14:25:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:44.272 [2024-12-10 14:25:08.915138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:44.272 14:25:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82919 00:19:44.839 1796.00 IOPS, 7.02 MiB/s [2024-12-10T14:25:09.676Z] [2024-12-10 14:25:09.664457] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:46.735 2880.50 IOPS, 11.25 MiB/s [2024-12-10T14:25:12.509Z] 3904.43 IOPS, 15.25 MiB/s [2024-12-10T14:25:13.885Z] 4677.38 IOPS, 18.27 MiB/s [2024-12-10T14:25:14.821Z] 5265.22 IOPS, 20.57 MiB/s [2024-12-10T14:25:14.821Z] 5762.70 IOPS, 22.51 MiB/s 00:19:49.984 Latency(us) 00:19:49.984 [2024-12-10T14:25:14.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.984 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:49.984 Verification LBA range: start 0x0 length 0x4000 00:19:49.984 NVMe0n1 : 10.01 5770.31 22.54 3968.23 0.00 13119.74 621.85 3019898.88 00:19:49.984 [2024-12-10T14:25:14.821Z] =================================================================================================================== 00:19:49.984 [2024-12-10T14:25:14.821Z] Total : 5770.31 22.54 3968.23 0.00 13119.74 0.00 3019898.88 00:19:49.984 { 00:19:49.984 "results": [ 00:19:49.984 { 00:19:49.984 "job": "NVMe0n1", 00:19:49.984 "core_mask": "0x4", 00:19:49.984 "workload": "verify", 00:19:49.984 "status": "finished", 00:19:49.984 "verify_range": { 00:19:49.984 "start": 0, 00:19:49.984 "length": 16384 00:19:49.984 }, 00:19:49.984 "queue_depth": 128, 00:19:49.984 "io_size": 4096, 00:19:49.984 "runtime": 10.008991, 00:19:49.984 "iops": 5770.311912559418, 00:19:49.984 "mibps": 22.540280908435225, 00:19:49.984 "io_failed": 39718, 00:19:49.984 "io_timeout": 0, 00:19:49.984 "avg_latency_us": 13119.743554233666, 00:19:49.984 "min_latency_us": 621.8472727272728, 00:19:49.984 "max_latency_us": 3019898.88 00:19:49.984 } 00:19:49.984 ], 00:19:49.984 "core_count": 1 00:19:49.984 } 00:19:49.984 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82798 00:19:49.984 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82798 ']' 00:19:49.984 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82798 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82798 00:19:49.985 killing process with pid 82798 00:19:49.985 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.985 00:19:49.985 Latency(us) 00:19:49.985 [2024-12-10T14:25:14.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.985 [2024-12-10T14:25:14.822Z] =================================================================================================================== 00:19:49.985 [2024-12-10T14:25:14.822Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82798' 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82798 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82798 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=83029 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:49.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 83029 /var/tmp/bdevperf.sock 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83029 ']' 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.985 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:49.985 [2024-12-10 14:25:14.735273] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:19:49.985 [2024-12-10 14:25:14.735587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83029 ] 00:19:50.243 [2024-12-10 14:25:14.880353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.243 [2024-12-10 14:25:14.909903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.243 [2024-12-10 14:25:14.939188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.243 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.243 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:50.243 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:50.243 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=83042 00:19:50.243 14:25:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:50.501 14:25:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:51.068 NVMe0n1 00:19:51.068 14:25:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=83078 00:19:51.068 14:25:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.068 14:25:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:51.068 Running I/O for 10 seconds... 
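The attach command above sets the two knobs this timeout test exercises: --reconnect-delay-sec 2 spaces out the reconnect attempts, and --ctrlr-loss-timeout-sec 5 bounds how long the bdev_nvme layer keeps retrying before the controller is declared lost. A rough hand-run sketch of the same sequence, with the socket path, target address, and option values copied from the log (this is a reconstruction for illustration, not the host/timeout.sh script itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdev_nvme options exactly as recorded above (-r -1 -e 9)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach with a 2 s reconnect delay and a 5 s controller-loss timeout
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the randread workload configured at bdevperf startup, in the background
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  # removing the target listener is what forces the connect() errno 111 failures
  # and the repeated reconnect attempts seen in the output that follows
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420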
00:19:52.005 14:25:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:52.266 16002.00 IOPS, 62.51 MiB/s [2024-12-10T14:25:17.103Z] [2024-12-10 14:25:16.969295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 
00:19:52.266 [2024-12-10 14:25:16.969690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.266 [2024-12-10 14:25:16.969706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.969985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.970024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.970050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.970058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.970065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.970073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ee10 is same with the state(6) to be set 00:19:52.267 [2024-12-10 14:25:16.970257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 
[2024-12-10 14:25:16.970317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970546] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-12-10 14:25:16.970778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.267 [2024-12-10 14:25:16.970789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.970981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.970993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:52.268 [2024-12-10 14:25:16.971446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 
14:25:16.971686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.268 [2024-12-10 14:25:16.971707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.268 [2024-12-10 14:25:16.971717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.971961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.971971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34400 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
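Every completion in this dump, above and below, carries the same status pair "(00/08)". In SPDK's (SCT/SC) notation that is Status Code Type 0x0 (generic command status) with Status Code 0x08, which the NVMe specification names "Command Aborted due to SQ Deletion": the expected outcome for reads still queued on submission queue 1 when that queue is torn down during the controller reset. A small helper for decoding such a pair (an illustrative sketch only, not part of the SPDK test scripts):

# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "00/08".
decode_sct_sc() {
    local sct=${1%/*} sc=${1#*/}
    case "$sct" in
        00)
            case "$sc" in
                00) echo "generic: successful completion" ;;
                07) echo "generic: command abort requested" ;;
                08) echo "generic: command aborted due to SQ deletion" ;;
                *)  echo "generic: status code 0x$sc" ;;
            esac ;;
        01) echo "command specific: status code 0x$sc" ;;
        02) echo "media and data integrity: status code 0x$sc" ;;
        *)  echo "SCT 0x$sct, SC 0x$sc" ;;
    esac
}

decode_sct_sc 00/08   # prints: generic: command aborted due to SQ deletion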
00:19:52.269 [2024-12-10 14:25:16.972596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.269 [2024-12-10 14:25:16.972617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.269 [2024-12-10 14:25:16.972629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 
14:25:16.972811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.972981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.972992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.973013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.973034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.973055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.973079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.973100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.270 [2024-12-10 14:25:16.973121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1011920 is same with the state(6) to be set 00:19:52.270 [2024-12-10 14:25:16.973145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.270 [2024-12-10 14:25:16.973153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.270 [2024-12-10 14:25:16.973161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67448 len:8 PRP1 0x0 PRP2 0x0 00:19:52.270 [2024-12-10 14:25:16.973171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.270 [2024-12-10 14:25:16.973504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:52.270 [2024-12-10 14:25:16.973604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4e50 (9): Bad file descriptor 00:19:52.270 [2024-12-10 14:25:16.973709] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.270 [2024-12-10 14:25:16.973730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa4e50 with addr=10.0.0.3, port=4420 00:19:52.270 [2024-12-10 14:25:16.973741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4e50 is same with the state(6) to be set 00:19:52.270 [2024-12-10 14:25:16.973759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4e50 (9): Bad file descriptor 00:19:52.270 [2024-12-10 14:25:16.973776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:52.270 [2024-12-10 14:25:16.973786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:52.270 [2024-12-10 14:25:16.973796] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:52.270 [2024-12-10 14:25:16.973807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:52.270 [2024-12-10 14:25:16.973817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:52.270 14:25:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 83078 00:19:54.143 9304.00 IOPS, 36.34 MiB/s [2024-12-10T14:25:18.980Z] 6202.67 IOPS, 24.23 MiB/s [2024-12-10T14:25:18.980Z] [2024-12-10 14:25:18.974004] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.143 [2024-12-10 14:25:18.974070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa4e50 with addr=10.0.0.3, port=4420 00:19:54.143 [2024-12-10 14:25:18.974087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4e50 is same with the state(6) to be set 00:19:54.143 [2024-12-10 14:25:18.974111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4e50 (9): Bad file descriptor 00:19:54.143 [2024-12-10 14:25:18.974131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:54.143 [2024-12-10 14:25:18.974140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:54.143 [2024-12-10 14:25:18.974151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:54.143 [2024-12-10 14:25:18.974162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:54.143 [2024-12-10 14:25:18.974173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:56.016 4652.00 IOPS, 18.17 MiB/s [2024-12-10T14:25:21.112Z] 3721.60 IOPS, 14.54 MiB/s [2024-12-10T14:25:21.112Z] [2024-12-10 14:25:20.974344] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.275 [2024-12-10 14:25:20.974408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa4e50 with addr=10.0.0.3, port=4420 00:19:56.275 [2024-12-10 14:25:20.974423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4e50 is same with the state(6) to be set 00:19:56.275 [2024-12-10 14:25:20.974445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4e50 (9): Bad file descriptor 00:19:56.275 [2024-12-10 14:25:20.974462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:56.275 [2024-12-10 14:25:20.974472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:56.275 [2024-12-10 14:25:20.974482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:56.275 [2024-12-10 14:25:20.974492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
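The reconnect attempts above all fail the same way: uring_sock_create reports connect() errno = 111, i.e. ECONNREFUSED, since evidently nothing is accepting connections on 10.0.0.3:4420 any more, and bdev_nvme schedules another reset roughly every two seconds (the "reconnect delay" events counted from the trace further below). A standalone way to observe the same condition from the host shell, outside of timeout.sh (an illustrative sketch only, the test itself does not do this):

# Probe the NVMe-oF/TCP listener the host keeps reconnecting to; a refused
# connection here corresponds to the errno = 111 seen in uring_sock_create.
for attempt in 1 2 3; do
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
        echo "attempt $attempt: port 4420 is accepting connections"
    else
        echo "attempt $attempt: connection refused or timed out"
    fi
    sleep 2   # roughly the reconnect delay bdev_nvme applies in this run
done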
00:19:56.275 [2024-12-10 14:25:20.974502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:19:58.148 3101.33 IOPS, 12.11 MiB/s [2024-12-10T14:25:22.985Z] 2658.29 IOPS, 10.38 MiB/s [2024-12-10T14:25:22.985Z] [2024-12-10 14:25:22.974639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:19:58.148 [2024-12-10 14:25:22.974684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:19:58.148 [2024-12-10 14:25:22.974712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:19:58.148 [2024-12-10 14:25:22.974721] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:19:58.148 [2024-12-10 14:25:22.974731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:19:59.345 2326.00 IOPS, 9.09 MiB/s
00:19:59.345 Latency(us)
00:19:59.345 [2024-12-10T14:25:24.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.345 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:19:59.345 NVMe0n1 : 8.17 2276.26 8.89 15.66 0.00 55784.97 7238.75 7015926.69
00:19:59.345 [2024-12-10T14:25:24.182Z] ===================================================================================================================
00:19:59.345 [2024-12-10T14:25:24.182Z] Total : 2276.26 8.89 15.66 0.00 55784.97 7238.75 7015926.69
00:19:59.345 {
00:19:59.345 "results": [
00:19:59.345 {
00:19:59.345 "job": "NVMe0n1",
00:19:59.345 "core_mask": "0x4",
00:19:59.345 "workload": "randread",
00:19:59.345 "status": "finished",
00:19:59.345 "queue_depth": 128,
00:19:59.345 "io_size": 4096,
00:19:59.345 "runtime": 8.174831,
00:19:59.345 "iops": 2276.2550075958757,
00:19:59.345 "mibps": 8.89162112342139,
00:19:59.345 "io_failed": 128,
00:19:59.345 "io_timeout": 0,
00:19:59.345 "avg_latency_us": 55784.968148435684,
00:19:59.345 "min_latency_us": 7238.749090909091,
00:19:59.345 "max_latency_us": 7015926.69090909
00:19:59.345 }
00:19:59.345 ],
00:19:59.345 "core_count": 1
00:19:59.345 }
00:19:59.345 14:25:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:59.345 Attaching 5 probes...
00:19:59.345 1470.918360: reset bdev controller NVMe0
00:19:59.345 1471.083380: reconnect bdev controller NVMe0
00:19:59.345 3471.277156: reconnect delay bdev controller NVMe0
00:19:59.345 3471.327453: reconnect bdev controller NVMe0
00:19:59.345 5471.663256: reconnect delay bdev controller NVMe0
00:19:59.345 5471.681096: reconnect bdev controller NVMe0
00:19:59.345 7472.036150: reconnect delay bdev controller NVMe0
00:19:59.345 7472.053188: reconnect bdev controller NVMe0
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 83042
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 83029
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83029 ']'
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83029
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83029
00:19:59.345 killing process with pid 83029
Received shutdown signal, test time was about 8.247513 seconds
00:19:59.345
00:19:59.345 Latency(us)
00:19:59.345 [2024-12-10T14:25:24.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.345 [2024-12-10T14:25:24.182Z] ===================================================================================================================
00:19:59.345 [2024-12-10T14:25:24.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83029'
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83029
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83029
00:19:59.345 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:59.913 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:19:59.913 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:19:59.913 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:59.913 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync
00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e
00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:59.914 14:25:24
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.914 rmmod nvme_tcp 00:19:59.914 rmmod nvme_fabrics 00:19:59.914 rmmod nvme_keyring 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82616 ']' 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82616 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82616 ']' 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82616 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82616 00:19:59.914 killing process with pid 82616 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82616' 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82616 00:19:59.914 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82616 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:00.173 14:25:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.173 14:25:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.173 14:25:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:00.173 00:20:00.173 real 0m45.227s 00:20:00.173 user 2m12.950s 00:20:00.173 sys 0m5.214s 00:20:00.173 ************************************ 00:20:00.173 END TEST nvmf_timeout 00:20:00.173 ************************************ 00:20:00.173 14:25:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.173 14:25:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:00.434 14:25:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:00.434 14:25:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:00.434 ************************************ 00:20:00.434 END TEST nvmf_host 00:20:00.434 ************************************ 00:20:00.434 00:20:00.434 real 4m59.978s 00:20:00.434 user 13m5.619s 00:20:00.434 sys 1m7.142s 00:20:00.434 14:25:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.434 14:25:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.434 14:25:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:00.434 14:25:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:00.434 ************************************ 00:20:00.434 END TEST nvmf_tcp 00:20:00.434 ************************************ 00:20:00.434 00:20:00.434 real 12m27.215s 00:20:00.434 user 30m1.000s 00:20:00.434 sys 3m5.641s 00:20:00.434 14:25:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.434 14:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.434 14:25:25 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:00.434 14:25:25 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:00.434 14:25:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:00.434 14:25:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.434 14:25:25 -- common/autotest_common.sh@10 -- # set +x 00:20:00.434 ************************************ 00:20:00.434 START TEST nvmf_dif 00:20:00.434 ************************************ 00:20:00.434 14:25:25 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:00.434 * Looking for test storage... 
00:20:00.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:00.434 14:25:25 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:00.434 14:25:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:20:00.434 14:25:25 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:00.694 14:25:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:00.694 14:25:25 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.694 14:25:25 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:00.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.694 --rc genhtml_branch_coverage=1 00:20:00.694 --rc genhtml_function_coverage=1 00:20:00.694 --rc genhtml_legend=1 00:20:00.694 --rc geninfo_all_blocks=1 00:20:00.694 --rc geninfo_unexecuted_blocks=1 00:20:00.694 00:20:00.694 ' 00:20:00.694 14:25:25 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:00.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.694 --rc genhtml_branch_coverage=1 00:20:00.694 --rc genhtml_function_coverage=1 00:20:00.694 --rc genhtml_legend=1 00:20:00.694 --rc geninfo_all_blocks=1 00:20:00.694 --rc geninfo_unexecuted_blocks=1 00:20:00.694 00:20:00.694 ' 00:20:00.694 14:25:25 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:00.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.694 --rc genhtml_branch_coverage=1 00:20:00.694 --rc genhtml_function_coverage=1 00:20:00.694 --rc genhtml_legend=1 00:20:00.694 --rc geninfo_all_blocks=1 00:20:00.694 --rc geninfo_unexecuted_blocks=1 00:20:00.694 00:20:00.694 ' 00:20:00.694 14:25:25 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:00.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.694 --rc genhtml_branch_coverage=1 00:20:00.694 --rc genhtml_function_coverage=1 00:20:00.694 --rc genhtml_legend=1 00:20:00.694 --rc geninfo_all_blocks=1 00:20:00.694 --rc geninfo_unexecuted_blocks=1 00:20:00.694 00:20:00.694 ' 00:20:00.694 14:25:25 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.694 14:25:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.694 14:25:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.694 14:25:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.694 14:25:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.694 14:25:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:00.694 14:25:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.694 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.694 14:25:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:00.694 14:25:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:00.694 14:25:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:00.694 14:25:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:00.694 14:25:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.694 14:25:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.695 14:25:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:00.695 14:25:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:00.695 14:25:25 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:00.695 Cannot find device "nvmf_init_br" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:00.695 Cannot find device "nvmf_init_br2" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:00.695 Cannot find device "nvmf_tgt_br" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.695 Cannot find device "nvmf_tgt_br2" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:00.695 Cannot find device "nvmf_init_br" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:00.695 Cannot find device "nvmf_init_br2" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:00.695 Cannot find device "nvmf_tgt_br" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:00.695 Cannot find device "nvmf_tgt_br2" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:00.695 Cannot find device "nvmf_br" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:00.695 Cannot find device "nvmf_init_if" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:00.695 Cannot find device "nvmf_init_if2" 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.695 14:25:25 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.954 14:25:25 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:00.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:20:00.954 00:20:00.954 --- 10.0.0.3 ping statistics --- 00:20:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.954 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:00.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:00.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:20:00.954 00:20:00.954 --- 10.0.0.4 ping statistics --- 00:20:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.954 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:00.954 00:20:00.954 --- 10.0.0.1 ping statistics --- 00:20:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.954 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:00.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:00.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:00.954 00:20:00.954 --- 10.0.0.2 ping statistics --- 00:20:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.954 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:00.954 14:25:25 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:01.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:01.213 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:01.213 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.473 14:25:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:01.473 14:25:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83574 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.473 14:25:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83574 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83574 ']' 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.473 14:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.473 [2024-12-10 14:25:26.176928] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:20:01.473 [2024-12-10 14:25:26.177242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.733 [2024-12-10 14:25:26.330916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.733 [2024-12-10 14:25:26.368274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
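[editor note] The trace above rebuilds the harness's virtual test network (network namespace, veth pairs, bridge, firewall rules) and then launches nvmf_tgt inside that namespace. The following is a condensed, hedged sketch of that setup, not the harness's exact script: interface names and addresses are taken from the log, while the second initiator/target veth pair and a few "link up" steps are omitted for brevity.

```bash
# Condensed sketch of the test-network setup traced above (names and addresses
# as shown in the log; second veth pair and some bring-up steps omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# (The harness additionally tags each rule with an 'SPDK_NVMF' comment so the
#  teardown seen earlier can drop them via iptables-save | grep -v SPDK_NVMF | iptables-restore.)
ping -c 1 10.0.0.3    # initiator -> target reachability check before starting the app
# The target application is then started inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
```

The ping checks in the trace verify reachability across the bridge in both directions before the target application is launched and waited on via waitforlisten.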
00:20:01.733 [2024-12-10 14:25:26.368619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.733 [2024-12-10 14:25:26.368657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.733 [2024-12-10 14:25:26.368667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.733 [2024-12-10 14:25:26.368676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.733 [2024-12-10 14:25:26.369080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.733 [2024-12-10 14:25:26.402817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:01.733 14:25:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 14:25:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.733 14:25:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:01.733 14:25:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 [2024-12-10 14:25:26.505881] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.733 14:25:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 ************************************ 00:20:01.733 START TEST fio_dif_1_default 00:20:01.733 ************************************ 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 bdev_null0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.733 
14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:01.733 [2024-12-10 14:25:26.554074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.733 { 00:20:01.733 "params": { 00:20:01.733 "name": "Nvme$subsystem", 00:20:01.733 "trtype": "$TEST_TRANSPORT", 00:20:01.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.733 "adrfam": "ipv4", 00:20:01.733 "trsvcid": "$NVMF_PORT", 00:20:01.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.733 "hdgst": ${hdgst:-false}, 00:20:01.733 "ddgst": ${ddgst:-false} 00:20:01.733 }, 00:20:01.733 "method": "bdev_nvme_attach_controller" 00:20:01.733 } 00:20:01.733 EOF 00:20:01.733 )") 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:01.733 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@582 -- # cat 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:01.734 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:01.992 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:01.992 14:25:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:01.992 "params": { 00:20:01.992 "name": "Nvme0", 00:20:01.992 "trtype": "tcp", 00:20:01.992 "traddr": "10.0.0.3", 00:20:01.992 "adrfam": "ipv4", 00:20:01.992 "trsvcid": "4420", 00:20:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.992 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.992 "hdgst": false, 00:20:01.992 "ddgst": false 00:20:01.992 }, 00:20:01.992 "method": "bdev_nvme_attach_controller" 00:20:01.992 }' 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.993 14:25:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.993 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:01.993 fio-3.35 00:20:01.993 Starting 1 thread 00:20:14.216 00:20:14.216 filename0: (groupid=0, jobs=1): err= 0: pid=83634: Tue Dec 10 14:25:37 2024 00:20:14.216 read: IOPS=8924, BW=34.9MiB/s (36.6MB/s)(349MiB/10001msec) 00:20:14.216 slat (nsec): min=6276, max=73833, avg=8462.20, stdev=3826.49 00:20:14.216 clat (usec): min=333, max=2839, avg=423.02, stdev=60.54 00:20:14.216 lat (usec): min=339, max=2851, avg=431.48, stdev=61.62 00:20:14.216 clat percentiles (usec): 00:20:14.216 | 1.00th=[ 343], 5.00th=[ 
355], 10.00th=[ 363], 20.00th=[ 375], 00:20:14.216 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 424], 00:20:14.216 | 70.00th=[ 449], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 529], 00:20:14.216 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 676], 99.95th=[ 709], 00:20:14.216 | 99.99th=[ 1106] 00:20:14.216 bw ( KiB/s): min=30530, max=38304, per=100.00%, avg=35777.79, stdev=2650.41, samples=19 00:20:14.216 iops : min= 7632, max= 9576, avg=8944.42, stdev=662.66, samples=19 00:20:14.216 lat (usec) : 500=87.83%, 750=12.14%, 1000=0.02% 00:20:14.216 lat (msec) : 2=0.01%, 4=0.01% 00:20:14.216 cpu : usr=84.59%, sys=13.48%, ctx=63, majf=0, minf=9 00:20:14.216 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.216 issued rwts: total=89256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.216 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:14.216 00:20:14.216 Run status group 0 (all jobs): 00:20:14.216 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=349MiB (366MB), run=10001-10001msec 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 00:20:14.216 real 0m10.944s 00:20:14.216 user 0m9.061s 00:20:14.216 sys 0m1.579s 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 ************************************ 00:20:14.216 END TEST fio_dif_1_default 00:20:14.216 ************************************ 00:20:14.216 14:25:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:14.216 14:25:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.216 14:25:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 ************************************ 00:20:14.216 START TEST fio_dif_1_multi_subsystems 00:20:14.216 ************************************ 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 bdev_null0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 [2024-12-10 14:25:37.552814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 bdev_null1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.216 { 00:20:14.216 "params": { 00:20:14.216 "name": "Nvme$subsystem", 00:20:14.216 "trtype": "$TEST_TRANSPORT", 00:20:14.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.216 "adrfam": "ipv4", 00:20:14.216 "trsvcid": "$NVMF_PORT", 00:20:14.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.216 "hdgst": ${hdgst:-false}, 00:20:14.216 "ddgst": ${ddgst:-false} 00:20:14.216 }, 00:20:14.216 "method": "bdev_nvme_attach_controller" 00:20:14.216 } 00:20:14.216 EOF 00:20:14.216 )") 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
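[editor note] The rpc_cmd calls traced above (here for fio_dif_1_multi_subsystems, and earlier for fio_dif_1_default) set up a DIF-capable null bdev, a subsystem, a namespace, and a TCP listener per test target. Condensed into a sketch using the stock scripts/rpc.py client instead of the harness's rpc_cmd wrapper, and assuming the default /var/tmp/spdk.sock RPC socket shown in the log:

```bash
# Per-subsystem setup equivalent to the traced rpc_cmd calls (parameters copied
# from the trace; run from the SPDK repository root).
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip   # done once, earlier in the log
./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    --serial-number 53313233-1 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420
```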
00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:14.216 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.217 { 00:20:14.217 "params": { 00:20:14.217 "name": "Nvme$subsystem", 00:20:14.217 "trtype": "$TEST_TRANSPORT", 00:20:14.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.217 "adrfam": "ipv4", 00:20:14.217 "trsvcid": "$NVMF_PORT", 00:20:14.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.217 "hdgst": ${hdgst:-false}, 00:20:14.217 "ddgst": ${ddgst:-false} 00:20:14.217 }, 00:20:14.217 "method": "bdev_nvme_attach_controller" 00:20:14.217 } 00:20:14.217 EOF 00:20:14.217 )") 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
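[editor note] The gen_nvmf_target_json / fio_bdev plumbing above amounts to running stock fio with SPDK's bdev ioengine plugin preloaded and a JSON bdev config supplied via --spdk_json_conf. Below is a minimal standalone sketch limited to the first controller; the "subsystems"/"bdev" wrapper, the Nvme0n1 filename, and the job options are assumptions based on the plugin's usual configuration format and the job banner in the log, not text copied from the trace.

```bash
# Hypothetical standalone equivalent of the traced fio_bdev run, one controller only.
cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } } ] } ]
}
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_bdev.json \
  --thread --name=filename0 --filename=Nvme0n1 \
  --rw=randread --bs=4k --iodepth=4 --time_based --runtime=10
```

The harness does the same thing but feeds both the generated JSON and the generated job file through process substitution, as shown by the "--spdk_json_conf /dev/fd/62 /dev/fd/61" arguments in the trace that follows.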
00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:14.217 "params": { 00:20:14.217 "name": "Nvme0", 00:20:14.217 "trtype": "tcp", 00:20:14.217 "traddr": "10.0.0.3", 00:20:14.217 "adrfam": "ipv4", 00:20:14.217 "trsvcid": "4420", 00:20:14.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:14.217 "hdgst": false, 00:20:14.217 "ddgst": false 00:20:14.217 }, 00:20:14.217 "method": "bdev_nvme_attach_controller" 00:20:14.217 },{ 00:20:14.217 "params": { 00:20:14.217 "name": "Nvme1", 00:20:14.217 "trtype": "tcp", 00:20:14.217 "traddr": "10.0.0.3", 00:20:14.217 "adrfam": "ipv4", 00:20:14.217 "trsvcid": "4420", 00:20:14.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.217 "hdgst": false, 00:20:14.217 "ddgst": false 00:20:14.217 }, 00:20:14.217 "method": "bdev_nvme_attach_controller" 00:20:14.217 }' 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:14.217 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.217 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:14.217 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:14.217 fio-3.35 00:20:14.217 Starting 2 threads 00:20:24.193 00:20:24.193 filename0: (groupid=0, jobs=1): err= 0: pid=83791: Tue Dec 10 14:25:48 2024 00:20:24.193 read: IOPS=4938, BW=19.3MiB/s (20.2MB/s)(193MiB/10001msec) 00:20:24.193 slat (nsec): min=6453, max=71828, avg=13115.45, stdev=5094.23 00:20:24.193 clat (usec): min=572, max=3398, avg=774.04, stdev=85.27 00:20:24.193 lat (usec): min=579, max=3411, avg=787.16, stdev=86.70 00:20:24.193 clat percentiles (usec): 00:20:24.193 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 709], 00:20:24.193 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 783], 00:20:24.193 | 70.00th=[ 816], 80.00th=[ 848], 90.00th=[ 889], 95.00th=[ 922], 00:20:24.193 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1037], 99.95th=[ 1057], 00:20:24.193 | 99.99th=[ 2147] 00:20:24.193 bw ( KiB/s): min=17376, max=20960, per=50.28%, avg=19865.26, stdev=1053.82, samples=19 00:20:24.193 iops : min= 4344, max= 
5240, avg=4966.32, stdev=263.46, samples=19 00:20:24.193 lat (usec) : 750=44.63%, 1000=54.95% 00:20:24.193 lat (msec) : 2=0.40%, 4=0.02% 00:20:24.193 cpu : usr=90.15%, sys=8.42%, ctx=61, majf=0, minf=0 00:20:24.193 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.193 issued rwts: total=49392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.193 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:24.193 filename1: (groupid=0, jobs=1): err= 0: pid=83792: Tue Dec 10 14:25:48 2024 00:20:24.193 read: IOPS=4938, BW=19.3MiB/s (20.2MB/s)(193MiB/10001msec) 00:20:24.193 slat (nsec): min=6346, max=68624, avg=12931.83, stdev=5014.17 00:20:24.193 clat (usec): min=613, max=3331, avg=774.63, stdev=79.45 00:20:24.193 lat (usec): min=629, max=3341, avg=787.56, stdev=80.29 00:20:24.193 clat percentiles (usec): 00:20:24.193 | 1.00th=[ 660], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 709], 00:20:24.193 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 783], 00:20:24.193 | 70.00th=[ 807], 80.00th=[ 848], 90.00th=[ 881], 95.00th=[ 914], 00:20:24.193 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1057], 00:20:24.193 | 99.99th=[ 2180] 00:20:24.193 bw ( KiB/s): min=17376, max=20960, per=50.28%, avg=19865.26, stdev=1053.82, samples=19 00:20:24.193 iops : min= 4344, max= 5240, avg=4966.32, stdev=263.46, samples=19 00:20:24.193 lat (usec) : 750=46.16%, 1000=53.58% 00:20:24.193 lat (msec) : 2=0.25%, 4=0.02% 00:20:24.193 cpu : usr=90.59%, sys=8.04%, ctx=17, majf=0, minf=0 00:20:24.193 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.193 issued rwts: total=49392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.193 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:24.193 00:20:24.193 Run status group 0 (all jobs): 00:20:24.193 READ: bw=38.6MiB/s (40.5MB/s), 19.3MiB/s-19.3MiB/s (20.2MB/s-20.2MB/s), io=386MiB (405MB), run=10001-10001msec 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.193 00:20:24.193 real 0m11.093s 00:20:24.193 user 0m18.802s 00:20:24.193 sys 0m1.905s 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.193 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.193 ************************************ 00:20:24.193 END TEST fio_dif_1_multi_subsystems 00:20:24.193 ************************************ 00:20:24.193 14:25:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:24.193 14:25:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:24.194 14:25:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.194 14:25:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.194 ************************************ 00:20:24.194 START TEST fio_dif_rand_params 00:20:24.194 ************************************ 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:24.194 14:25:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.194 bdev_null0 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.194 [2024-12-10 14:25:48.694890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:24.194 { 00:20:24.194 "params": { 00:20:24.194 "name": "Nvme$subsystem", 00:20:24.194 "trtype": "$TEST_TRANSPORT", 00:20:24.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.194 "adrfam": "ipv4", 00:20:24.194 "trsvcid": "$NVMF_PORT", 00:20:24.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.194 "hdgst": ${hdgst:-false}, 00:20:24.194 "ddgst": ${ddgst:-false} 00:20:24.194 }, 00:20:24.194 "method": "bdev_nvme_attach_controller" 00:20:24.194 } 00:20:24.194 EOF 00:20:24.194 )") 00:20:24.194 
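[editor's sketch] The subsystem setup the xtrace above records can be reproduced outside the test harness with the standard SPDK RPC client. A minimal sketch, assuming nvmf_tgt is already running with the tcp transport created, and assuming scripts/rpc.py at its usual in-repo path (the RPC names and arguments are taken verbatim from the log):

  #!/usr/bin/env bash
  set -euo pipefail

  # Assumed location of the SPDK RPC client inside the test VM's repo checkout
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3 (the NULL_DIF=3 case)
  "$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

  # NVMe/TCP subsystem backed by that bdev, listening on 10.0.0.3:4420 as in the log
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
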
14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:24.194 "params": { 00:20:24.194 "name": "Nvme0", 00:20:24.194 "trtype": "tcp", 00:20:24.194 "traddr": "10.0.0.3", 00:20:24.194 "adrfam": "ipv4", 00:20:24.194 "trsvcid": "4420", 00:20:24.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.194 "hdgst": false, 00:20:24.194 "ddgst": false 00:20:24.194 }, 00:20:24.194 "method": "bdev_nvme_attach_controller" 00:20:24.194 }' 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.194 14:25:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.194 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:24.194 ... 00:20:24.194 fio-3.35 00:20:24.194 Starting 3 threads 00:20:30.760 00:20:30.760 filename0: (groupid=0, jobs=1): err= 0: pid=83948: Tue Dec 10 14:25:54 2024 00:20:30.760 read: IOPS=266, BW=33.3MiB/s (35.0MB/s)(167MiB/5005msec) 00:20:30.760 slat (nsec): min=6546, max=43780, avg=9800.92, stdev=4405.29 00:20:30.760 clat (usec): min=10455, max=12854, avg=11222.93, stdev=419.45 00:20:30.760 lat (usec): min=10462, max=12869, avg=11232.73, stdev=419.84 00:20:30.760 clat percentiles (usec): 00:20:30.760 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:20:30.760 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:20:30.760 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:20:30.760 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12911], 99.95th=[12911], 00:20:30.760 | 99.99th=[12911] 00:20:30.760 bw ( KiB/s): min=33024, max=35328, per=33.29%, avg=34099.20, stdev=741.96, samples=10 00:20:30.760 iops : min= 258, max= 276, avg=266.40, stdev= 5.80, samples=10 00:20:30.760 lat (msec) : 20=100.00% 00:20:30.760 cpu : usr=91.79%, sys=7.71%, ctx=9, majf=0, minf=0 00:20:30.760 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.760 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.760 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:30.760 filename0: (groupid=0, jobs=1): err= 0: pid=83949: Tue Dec 10 14:25:54 2024 00:20:30.760 read: IOPS=266, BW=33.4MiB/s (35.0MB/s)(167MiB/5002msec) 00:20:30.760 slat (nsec): min=6871, max=55015, avg=13572.82, stdev=4245.22 00:20:30.760 clat (usec): min=8901, max=13037, avg=11210.40, stdev=429.58 00:20:30.760 lat (usec): min=8913, max=13050, avg=11223.97, stdev=429.96 00:20:30.760 clat percentiles (usec): 00:20:30.760 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:20:30.760 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:20:30.760 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:20:30.760 | 99.00th=[12518], 99.50th=[12518], 99.90th=[13042], 99.95th=[13042], 00:20:30.760 | 99.99th=[13042] 00:20:30.760 bw ( KiB/s): min=33024, max=35328, per=33.29%, avg=34099.20, stdev=741.96, samples=10 00:20:30.760 iops : min= 258, max= 276, avg=266.40, stdev= 5.80, samples=10 00:20:30.760 lat (msec) : 10=0.22%, 20=99.78% 00:20:30.760 cpu : usr=92.00%, sys=7.50%, ctx=16, majf=0, minf=0 00:20:30.760 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.761 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:30.761 filename0: (groupid=0, jobs=1): err= 0: pid=83950: Tue Dec 10 14:25:54 2024 00:20:30.761 read: IOPS=266, BW=33.4MiB/s (35.0MB/s)(167MiB/5002msec) 00:20:30.761 slat (nsec): min=6885, max=55656, avg=14354.97, stdev=4500.07 00:20:30.761 clat (usec): min=8901, max=12998, avg=11207.28, 
stdev=428.88 00:20:30.761 lat (usec): min=8914, max=13033, avg=11221.64, stdev=429.34 00:20:30.761 clat percentiles (usec): 00:20:30.761 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:20:30.761 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:20:30.761 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:20:30.761 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13042], 99.95th=[13042], 00:20:30.761 | 99.99th=[13042] 00:20:30.761 bw ( KiB/s): min=33024, max=35328, per=33.30%, avg=34105.80, stdev=731.55, samples=10 00:20:30.761 iops : min= 258, max= 276, avg=266.40, stdev= 5.80, samples=10 00:20:30.761 lat (msec) : 10=0.22%, 20=99.78% 00:20:30.761 cpu : usr=92.04%, sys=7.42%, ctx=45, majf=0, minf=0 00:20:30.761 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.761 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:30.761 00:20:30.761 Run status group 0 (all jobs): 00:20:30.761 READ: bw=100MiB/s (105MB/s), 33.3MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=501MiB (525MB), run=5002-5005msec 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
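[editor's sketch] The 3-thread randread run that just completed is driven through fio's spdk_bdev ioengine with the NULL_DIF=3 parameters set above (bs=128k, numjobs=3, iodepth=3, runtime=5). A hypothetical stand-alone equivalent of what the harness feeds fio over /dev/fd — the bdev name Nvme0n1 and the bdev.json path are assumptions not shown in the log:

  #!/usr/bin/env bash
  set -euo pipefail

  # Hypothetical job file mirroring the "filename0" job reported above.
  # filename=Nvme0n1 assumes the attached controller's namespace bdev name.
  cat > dif_rand.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1
  EOF

  # Mirrors the LD_PRELOAD + fio invocation visible in the xtrace; bdev.json would
  # carry the bdev_nvme_attach_controller parameters printed by gen_nvmf_target_json.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./dif_rand.fio
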
00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 bdev_null0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 [2024-12-10 14:25:54.652685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 bdev_null1 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 bdev_null2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.761 { 00:20:30.761 "params": { 
00:20:30.761 "name": "Nvme$subsystem", 00:20:30.761 "trtype": "$TEST_TRANSPORT", 00:20:30.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.761 "adrfam": "ipv4", 00:20:30.761 "trsvcid": "$NVMF_PORT", 00:20:30.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.761 "hdgst": ${hdgst:-false}, 00:20:30.761 "ddgst": ${ddgst:-false} 00:20:30.761 }, 00:20:30.761 "method": "bdev_nvme_attach_controller" 00:20:30.761 } 00:20:30.761 EOF 00:20:30.761 )") 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:30.761 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.762 { 00:20:30.762 "params": { 00:20:30.762 "name": "Nvme$subsystem", 00:20:30.762 "trtype": "$TEST_TRANSPORT", 00:20:30.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.762 "adrfam": "ipv4", 00:20:30.762 "trsvcid": "$NVMF_PORT", 00:20:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.762 "hdgst": ${hdgst:-false}, 00:20:30.762 "ddgst": ${ddgst:-false} 00:20:30.762 }, 00:20:30.762 "method": "bdev_nvme_attach_controller" 00:20:30.762 } 00:20:30.762 EOF 00:20:30.762 )") 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:30.762 14:25:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.762 { 00:20:30.762 "params": { 00:20:30.762 "name": "Nvme$subsystem", 00:20:30.762 "trtype": "$TEST_TRANSPORT", 00:20:30.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.762 "adrfam": "ipv4", 00:20:30.762 "trsvcid": "$NVMF_PORT", 00:20:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.762 "hdgst": ${hdgst:-false}, 00:20:30.762 "ddgst": ${ddgst:-false} 00:20:30.762 }, 00:20:30.762 "method": "bdev_nvme_attach_controller" 00:20:30.762 } 00:20:30.762 EOF 00:20:30.762 )") 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:30.762 "params": { 00:20:30.762 "name": "Nvme0", 00:20:30.762 "trtype": "tcp", 00:20:30.762 "traddr": "10.0.0.3", 00:20:30.762 "adrfam": "ipv4", 00:20:30.762 "trsvcid": "4420", 00:20:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:30.762 "hdgst": false, 00:20:30.762 "ddgst": false 00:20:30.762 }, 00:20:30.762 "method": "bdev_nvme_attach_controller" 00:20:30.762 },{ 00:20:30.762 "params": { 00:20:30.762 "name": "Nvme1", 00:20:30.762 "trtype": "tcp", 00:20:30.762 "traddr": "10.0.0.3", 00:20:30.762 "adrfam": "ipv4", 00:20:30.762 "trsvcid": "4420", 00:20:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.762 "hdgst": false, 00:20:30.762 "ddgst": false 00:20:30.762 }, 00:20:30.762 "method": "bdev_nvme_attach_controller" 00:20:30.762 },{ 00:20:30.762 "params": { 00:20:30.762 "name": "Nvme2", 00:20:30.762 "trtype": "tcp", 00:20:30.762 "traddr": "10.0.0.3", 00:20:30.762 "adrfam": "ipv4", 00:20:30.762 "trsvcid": "4420", 00:20:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.762 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.762 "hdgst": false, 00:20:30.762 "ddgst": false 00:20:30.762 }, 00:20:30.762 "method": "bdev_nvme_attach_controller" 00:20:30.762 }' 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:30.762 14:25:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:30.762 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.762 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:30.762 ... 00:20:30.762 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:30.762 ... 00:20:30.762 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:30.762 ... 00:20:30.762 fio-3.35 00:20:30.762 Starting 24 threads 00:20:42.972 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84045: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=235, BW=942KiB/s (964kB/s)(9444KiB/10028msec) 00:20:42.972 slat (usec): min=4, max=8025, avg=32.76, stdev=328.30 00:20:42.972 clat (msec): min=22, max=135, avg=67.76, stdev=18.39 00:20:42.972 lat (msec): min=22, max=135, avg=67.79, stdev=18.38 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 50], 00:20:42.972 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:20:42.972 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 106], 00:20:42.972 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 132], 00:20:42.972 | 99.99th=[ 136] 00:20:42.972 bw ( KiB/s): min= 824, max= 1024, per=4.30%, avg=939.20, stdev=52.98, samples=20 00:20:42.972 iops : min= 206, max= 256, avg=234.75, stdev=13.25, samples=20 00:20:42.972 lat (msec) : 50=22.58%, 100=71.37%, 250=6.06% 00:20:42.972 cpu : usr=35.37%, sys=1.74%, ctx=1104, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84046: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=228, BW=914KiB/s (936kB/s)(9184KiB/10050msec) 00:20:42.972 slat (usec): min=5, max=8023, avg=27.42, stdev=334.08 00:20:42.972 clat (msec): min=16, max=135, avg=69.87, stdev=20.08 00:20:42.972 lat (msec): min=16, max=135, avg=69.90, stdev=20.09 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 51], 00:20:42.972 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:20:42.972 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:20:42.972 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 131], 00:20:42.972 | 99.99th=[ 136] 00:20:42.972 bw ( KiB/s): min= 736, max= 1563, per=4.17%, avg=911.35, stdev=163.55, samples=20 00:20:42.972 iops : min= 184, max= 390, avg=227.80, stdev=40.73, samples=20 00:20:42.972 lat (msec) : 20=2.09%, 50=16.81%, 100=74.00%, 250=7.10% 00:20:42.972 cpu : usr=34.56%, sys=1.97%, ctx=1071, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.8%, 16=16.5%, 32=0.0%, 
>=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84047: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=227, BW=912KiB/s (933kB/s)(9168KiB/10058msec) 00:20:42.972 slat (usec): min=8, max=4031, avg=15.45, stdev=84.07 00:20:42.972 clat (usec): min=1553, max=137249, avg=70005.80, stdev=26705.76 00:20:42.972 lat (usec): min=1563, max=137264, avg=70021.25, stdev=26706.02 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 29], 20.00th=[ 54], 00:20:42.972 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 78], 00:20:42.972 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 112], 00:20:42.972 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:20:42.972 | 99.99th=[ 138] 00:20:42.972 bw ( KiB/s): min= 656, max= 2544, per=4.17%, avg=910.15, stdev=390.34, samples=20 00:20:42.972 iops : min= 164, max= 636, avg=227.50, stdev=97.59, samples=20 00:20:42.972 lat (msec) : 2=0.70%, 4=2.09%, 10=2.09%, 20=4.10%, 50=7.37% 00:20:42.972 lat (msec) : 100=72.34%, 250=11.30% 00:20:42.972 cpu : usr=42.73%, sys=2.22%, ctx=1447, majf=0, minf=0 00:20:42.972 IO depths : 1=0.3%, 2=2.0%, 4=7.1%, 8=74.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=89.7%, 8=8.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84048: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=239, BW=957KiB/s (980kB/s)(9576KiB/10006msec) 00:20:42.972 slat (usec): min=3, max=8028, avg=28.24, stdev=327.33 00:20:42.972 clat (msec): min=5, max=119, avg=66.72, stdev=19.00 00:20:42.972 lat (msec): min=5, max=119, avg=66.75, stdev=19.00 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 48], 00:20:42.972 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:20:42.972 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 107], 00:20:42.972 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:20:42.972 | 99.99th=[ 121] 00:20:42.972 bw ( KiB/s): min= 792, max= 1024, per=4.32%, avg=943.05, stdev=62.70, samples=19 00:20:42.972 iops : min= 198, max= 256, avg=235.74, stdev=15.66, samples=19 00:20:42.972 lat (msec) : 10=0.29%, 20=0.63%, 50=28.11%, 100=65.75%, 250=5.22% 00:20:42.972 cpu : usr=31.28%, sys=1.65%, ctx=833, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=86.8%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84049: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=240, BW=961KiB/s (984kB/s)(9608KiB/10001msec) 00:20:42.972 slat (usec): min=4, max=8027, avg=24.03, stdev=245.16 00:20:42.972 clat (usec): min=1006, 
max=127343, avg=66518.68, stdev=20447.33 00:20:42.972 lat (usec): min=1014, max=127352, avg=66542.71, stdev=20443.66 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 3], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:20:42.972 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:20:42.972 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 106], 00:20:42.972 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 123], 99.95th=[ 123], 00:20:42.972 | 99.99th=[ 128] 00:20:42.972 bw ( KiB/s): min= 824, max= 1024, per=4.27%, avg=933.79, stdev=46.06, samples=19 00:20:42.972 iops : min= 206, max= 256, avg=233.42, stdev=11.52, samples=19 00:20:42.972 lat (msec) : 2=0.37%, 4=0.92%, 10=0.67%, 20=0.25%, 50=20.65% 00:20:42.972 lat (msec) : 100=70.57%, 250=6.58% 00:20:42.972 cpu : usr=43.77%, sys=2.33%, ctx=1365, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84050: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=230, BW=920KiB/s (942kB/s)(9208KiB/10008msec) 00:20:42.972 slat (usec): min=8, max=8030, avg=27.06, stdev=240.75 00:20:42.972 clat (msec): min=9, max=121, avg=69.40, stdev=18.66 00:20:42.972 lat (msec): min=9, max=121, avg=69.43, stdev=18.66 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:20:42.972 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:20:42.972 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 104], 00:20:42.972 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:20:42.972 | 99.99th=[ 122] 00:20:42.972 bw ( KiB/s): min= 670, max= 1026, per=4.15%, avg=906.11, stdev=86.86, samples=19 00:20:42.972 iops : min= 167, max= 256, avg=226.47, stdev=21.75, samples=19 00:20:42.972 lat (msec) : 10=0.26%, 20=0.43%, 50=17.46%, 100=76.28%, 250=5.56% 00:20:42.972 cpu : usr=42.82%, sys=2.25%, ctx=1587, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=88.4%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84051: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=204, BW=819KiB/s (839kB/s)(8196KiB/10009msec) 00:20:42.972 slat (usec): min=3, max=9024, avg=29.58, stdev=365.18 00:20:42.972 clat (msec): min=9, max=146, avg=77.98, stdev=19.69 00:20:42.972 lat (msec): min=9, max=146, avg=78.01, stdev=19.69 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 64], 00:20:42.972 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:20:42.972 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 117], 00:20:42.972 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 146], 00:20:42.972 | 99.99th=[ 146] 00:20:42.972 bw ( KiB/s): min= 640, max= 896, per=3.68%, avg=803.68, stdev=80.16, samples=19 00:20:42.972 iops : min= 160, max= 
224, avg=200.89, stdev=20.09, samples=19 00:20:42.972 lat (msec) : 10=0.15%, 20=0.29%, 50=9.13%, 100=78.82%, 250=11.62% 00:20:42.972 cpu : usr=36.24%, sys=2.06%, ctx=1119, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=2.6%, 4=10.9%, 8=71.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=90.7%, 8=6.9%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename0: (groupid=0, jobs=1): err= 0: pid=84052: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=227, BW=911KiB/s (933kB/s)(9128KiB/10018msec) 00:20:42.972 slat (usec): min=4, max=8043, avg=29.40, stdev=335.34 00:20:42.972 clat (msec): min=19, max=138, avg=70.08, stdev=18.49 00:20:42.972 lat (msec): min=19, max=138, avg=70.11, stdev=18.49 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:20:42.972 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:42.972 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:20:42.972 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 138], 00:20:42.972 | 99.99th=[ 138] 00:20:42.972 bw ( KiB/s): min= 768, max= 1024, per=4.15%, avg=906.40, stdev=75.63, samples=20 00:20:42.972 iops : min= 192, max= 256, avg=226.60, stdev=18.91, samples=20 00:20:42.972 lat (msec) : 20=0.26%, 50=18.01%, 100=74.72%, 250=7.01% 00:20:42.972 cpu : usr=35.26%, sys=2.00%, ctx=1088, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=79.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename1: (groupid=0, jobs=1): err= 0: pid=84053: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=236, BW=947KiB/s (970kB/s)(9504KiB/10036msec) 00:20:42.972 slat (usec): min=4, max=8033, avg=30.39, stdev=318.21 00:20:42.972 clat (msec): min=21, max=126, avg=67.41, stdev=19.56 00:20:42.972 lat (msec): min=21, max=126, avg=67.44, stdev=19.56 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 48], 00:20:42.972 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:20:42.972 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 108], 00:20:42.972 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 127], 00:20:42.972 | 99.99th=[ 128] 00:20:42.972 bw ( KiB/s): min= 776, max= 1264, per=4.32%, avg=943.80, stdev=98.22, samples=20 00:20:42.972 iops : min= 194, max= 316, avg=235.90, stdev=24.56, samples=20 00:20:42.972 lat (msec) : 50=25.38%, 100=67.55%, 250=7.07% 00:20:42.972 cpu : usr=38.48%, sys=2.17%, ctx=1133, majf=0, minf=9 00:20:42.972 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename1: (groupid=0, jobs=1): err= 0: pid=84054: Tue Dec 10 14:26:05 2024 
00:20:42.972 read: IOPS=223, BW=894KiB/s (915kB/s)(8996KiB/10064msec) 00:20:42.972 slat (usec): min=4, max=8023, avg=20.69, stdev=199.32 00:20:42.972 clat (msec): min=2, max=144, avg=71.34, stdev=21.80 00:20:42.972 lat (msec): min=2, max=144, avg=71.36, stdev=21.80 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 53], 00:20:42.972 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:20:42.972 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 108], 00:20:42.972 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 134], 00:20:42.972 | 99.99th=[ 144] 00:20:42.972 bw ( KiB/s): min= 656, max= 1680, per=4.09%, avg=893.10, stdev=198.89, samples=20 00:20:42.972 iops : min= 164, max= 420, avg=223.25, stdev=49.73, samples=20 00:20:42.972 lat (msec) : 4=0.62%, 10=0.89%, 20=1.96%, 50=13.52%, 100=74.17% 00:20:42.972 lat (msec) : 250=8.85% 00:20:42.972 cpu : usr=36.31%, sys=2.29%, ctx=1210, majf=0, minf=0 00:20:42.972 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=77.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:42.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.972 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.972 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.972 filename1: (groupid=0, jobs=1): err= 0: pid=84055: Tue Dec 10 14:26:05 2024 00:20:42.972 read: IOPS=223, BW=895KiB/s (916kB/s)(8984KiB/10042msec) 00:20:42.972 slat (usec): min=3, max=8027, avg=28.00, stdev=318.94 00:20:42.972 clat (msec): min=18, max=143, avg=71.29, stdev=18.29 00:20:42.972 lat (msec): min=18, max=143, avg=71.32, stdev=18.30 00:20:42.972 clat percentiles (msec): 00:20:42.972 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 51], 00:20:42.972 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:20:42.972 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:42.972 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 134], 00:20:42.972 | 99.99th=[ 144] 00:20:42.972 bw ( KiB/s): min= 768, max= 992, per=4.09%, avg=894.40, stdev=62.60, samples=20 00:20:42.972 iops : min= 192, max= 248, avg=223.60, stdev=15.65, samples=20 00:20:42.972 lat (msec) : 20=0.09%, 50=18.52%, 100=74.49%, 250=6.90% 00:20:42.972 cpu : usr=31.20%, sys=1.62%, ctx=838, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename1: (groupid=0, jobs=1): err= 0: pid=84056: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=235, BW=941KiB/s (963kB/s)(9412KiB/10004msec) 00:20:42.973 slat (usec): min=3, max=8026, avg=29.11, stdev=330.08 00:20:42.973 clat (msec): min=11, max=130, avg=67.90, stdev=19.21 00:20:42.973 lat (msec): min=11, max=130, avg=67.93, stdev=19.22 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:20:42.973 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:20:42.973 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 108], 00:20:42.973 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 131], 99.95th=[ 131], 00:20:42.973 | 99.99th=[ 131] 
00:20:42.973 bw ( KiB/s): min= 768, max= 1024, per=4.26%, avg=930.00, stdev=75.17, samples=19 00:20:42.973 iops : min= 192, max= 256, avg=232.47, stdev=18.78, samples=19 00:20:42.973 lat (msec) : 20=0.25%, 50=22.40%, 100=69.91%, 250=7.44% 00:20:42.973 cpu : usr=37.16%, sys=2.01%, ctx=1218, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename1: (groupid=0, jobs=1): err= 0: pid=84057: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=230, BW=923KiB/s (945kB/s)(9236KiB/10004msec) 00:20:42.973 slat (usec): min=3, max=8025, avg=17.92, stdev=166.80 00:20:42.973 clat (msec): min=5, max=132, avg=69.22, stdev=19.57 00:20:42.973 lat (msec): min=5, max=132, avg=69.24, stdev=19.56 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:20:42.973 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:20:42.973 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:20:42.973 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:20:42.973 | 99.99th=[ 132] 00:20:42.973 bw ( KiB/s): min= 670, max= 1024, per=4.15%, avg=906.00, stdev=87.20, samples=19 00:20:42.973 iops : min= 167, max= 256, avg=226.47, stdev=21.87, samples=19 00:20:42.973 lat (msec) : 10=0.56%, 20=0.39%, 50=20.74%, 100=71.98%, 250=6.32% 00:20:42.973 cpu : usr=38.44%, sys=2.02%, ctx=1117, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename1: (groupid=0, jobs=1): err= 0: pid=84058: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=233, BW=933KiB/s (956kB/s)(9372KiB/10041msec) 00:20:42.973 slat (usec): min=8, max=8024, avg=24.93, stdev=256.34 00:20:42.973 clat (msec): min=14, max=128, avg=68.38, stdev=19.36 00:20:42.973 lat (msec): min=14, max=128, avg=68.41, stdev=19.36 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 17], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 50], 00:20:42.973 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 72], 00:20:42.973 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 105], 00:20:42.973 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 122], 00:20:42.973 | 99.99th=[ 129] 00:20:42.973 bw ( KiB/s): min= 792, max= 1440, per=4.27%, avg=933.20, stdev=134.22, samples=20 00:20:42.973 iops : min= 198, max= 360, avg=233.30, stdev=33.56, samples=20 00:20:42.973 lat (msec) : 20=1.37%, 50=19.85%, 100=72.64%, 250=6.15% 00:20:42.973 cpu : usr=37.48%, sys=1.79%, ctx=1213, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename1: (groupid=0, jobs=1): err= 0: pid=84059: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=236, BW=947KiB/s (969kB/s)(9480KiB/10013msec) 00:20:42.973 slat (nsec): min=4724, max=37966, avg=15409.93, stdev=5085.98 00:20:42.973 clat (msec): min=13, max=119, avg=67.51, stdev=18.53 00:20:42.973 lat (msec): min=13, max=119, avg=67.53, stdev=18.53 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 50], 00:20:42.973 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:20:42.973 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 105], 00:20:42.973 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:20:42.973 | 99.99th=[ 121] 00:20:42.973 bw ( KiB/s): min= 824, max= 1072, per=4.32%, avg=943.60, stdev=57.72, samples=20 00:20:42.973 iops : min= 206, max= 268, avg=235.90, stdev=14.43, samples=20 00:20:42.973 lat (msec) : 20=0.42%, 50=22.03%, 100=71.94%, 250=5.61% 00:20:42.973 cpu : usr=36.40%, sys=2.06%, ctx=1112, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename1: (groupid=0, jobs=1): err= 0: pid=84060: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=231, BW=928KiB/s (950kB/s)(9280KiB/10002msec) 00:20:42.973 slat (usec): min=4, max=8026, avg=23.04, stdev=235.18 00:20:42.973 clat (msec): min=3, max=131, avg=68.87, stdev=20.44 00:20:42.973 lat (msec): min=3, max=131, avg=68.89, stdev=20.44 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 20], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:20:42.973 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:20:42.973 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 108], 00:20:42.973 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:20:42.973 | 99.99th=[ 132] 00:20:42.973 bw ( KiB/s): min= 656, max= 1024, per=4.16%, avg=909.79, stdev=83.82, samples=19 00:20:42.973 iops : min= 164, max= 256, avg=227.42, stdev=20.94, samples=19 00:20:42.973 lat (msec) : 4=0.26%, 10=0.39%, 20=0.56%, 50=18.79%, 100=72.11% 00:20:42.973 lat (msec) : 250=7.89% 00:20:42.973 cpu : usr=40.35%, sys=2.36%, ctx=1330, majf=0, minf=10 00:20:42.973 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=80.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84061: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=221, BW=885KiB/s (906kB/s)(8900KiB/10054msec) 00:20:42.973 slat (usec): min=4, max=8023, avg=17.08, stdev=169.88 00:20:42.973 clat (msec): min=6, max=144, avg=72.06, stdev=21.08 00:20:42.973 lat (msec): min=6, max=144, avg=72.08, stdev=21.08 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 57], 00:20:42.973 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:20:42.973 | 70.00th=[ 83], 
80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:20:42.973 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 132], 00:20:42.973 | 99.99th=[ 144] 00:20:42.973 bw ( KiB/s): min= 656, max= 1536, per=4.06%, avg=886.30, stdev=168.81, samples=20 00:20:42.973 iops : min= 164, max= 384, avg=221.55, stdev=42.21, samples=20 00:20:42.973 lat (msec) : 10=1.44%, 20=1.44%, 50=14.47%, 100=73.80%, 250=8.85% 00:20:42.973 cpu : usr=31.29%, sys=1.56%, ctx=850, majf=0, minf=9 00:20:42.973 IO depths : 1=0.2%, 2=1.3%, 4=4.6%, 8=77.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84062: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=221, BW=888KiB/s (909kB/s)(8924KiB/10054msec) 00:20:42.973 slat (usec): min=6, max=8025, avg=31.67, stdev=356.18 00:20:42.973 clat (msec): min=5, max=140, avg=71.87, stdev=20.58 00:20:42.973 lat (msec): min=5, max=140, avg=71.91, stdev=20.58 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 57], 00:20:42.973 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:20:42.973 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:42.973 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:20:42.973 | 99.99th=[ 142] 00:20:42.973 bw ( KiB/s): min= 656, max= 1523, per=4.05%, avg=885.35, stdev=166.76, samples=20 00:20:42.973 iops : min= 164, max= 380, avg=221.30, stdev=41.54, samples=20 00:20:42.973 lat (msec) : 10=0.09%, 20=2.78%, 50=12.33%, 100=76.74%, 250=8.07% 00:20:42.973 cpu : usr=37.67%, sys=2.45%, ctx=1142, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=1.7%, 4=7.0%, 8=75.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84063: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=231, BW=926KiB/s (949kB/s)(9296KiB/10034msec) 00:20:42.973 slat (usec): min=8, max=8024, avg=27.14, stdev=305.87 00:20:42.973 clat (msec): min=13, max=134, avg=68.92, stdev=20.23 00:20:42.973 lat (msec): min=13, max=134, avg=68.94, stdev=20.23 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 18], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 50], 00:20:42.973 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:20:42.973 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:20:42.973 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:42.973 | 99.99th=[ 136] 00:20:42.973 bw ( KiB/s): min= 792, max= 1552, per=4.24%, avg=925.50, stdev=152.44, samples=20 00:20:42.973 iops : min= 198, max= 388, avg=231.35, stdev=38.11, samples=20 00:20:42.973 lat (msec) : 20=1.29%, 50=19.58%, 100=72.50%, 250=6.63% 00:20:42.973 cpu : usr=34.08%, sys=1.75%, ctx=931, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 
0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84064: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=221, BW=886KiB/s (907kB/s)(8900KiB/10048msec) 00:20:42.973 slat (usec): min=3, max=8029, avg=16.94, stdev=170.01 00:20:42.973 clat (msec): min=13, max=140, avg=72.08, stdev=21.73 00:20:42.973 lat (msec): min=13, max=140, avg=72.10, stdev=21.73 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 57], 00:20:42.973 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:20:42.973 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:20:42.973 | 99.00th=[ 121], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 140], 00:20:42.973 | 99.99th=[ 140] 00:20:42.973 bw ( KiB/s): min= 656, max= 1641, per=4.04%, avg=883.25, stdev=191.32, samples=20 00:20:42.973 iops : min= 164, max= 410, avg=220.80, stdev=47.78, samples=20 00:20:42.973 lat (msec) : 20=3.51%, 50=13.35%, 100=75.19%, 250=7.96% 00:20:42.973 cpu : usr=33.32%, sys=2.11%, ctx=917, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=76.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84065: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=230, BW=920KiB/s (942kB/s)(9216KiB/10015msec) 00:20:42.973 slat (usec): min=4, max=9021, avg=25.22, stdev=238.69 00:20:42.973 clat (msec): min=23, max=132, avg=69.37, stdev=18.60 00:20:42.973 lat (msec): min=23, max=132, avg=69.40, stdev=18.61 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:20:42.973 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:20:42.973 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:20:42.973 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 129], 00:20:42.973 | 99.99th=[ 133] 00:20:42.973 bw ( KiB/s): min= 720, max= 1024, per=4.20%, avg=917.40, stdev=70.35, samples=20 00:20:42.973 iops : min= 180, max= 256, avg=229.35, stdev=17.59, samples=20 00:20:42.973 lat (msec) : 50=19.40%, 100=73.44%, 250=7.16% 00:20:42.973 cpu : usr=41.37%, sys=2.16%, ctx=1382, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84066: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=228, BW=914KiB/s (936kB/s)(9180KiB/10041msec) 00:20:42.973 slat (usec): min=5, max=11028, avg=23.68, stdev=284.31 00:20:42.973 clat (msec): min=15, max=143, avg=69.79, stdev=20.27 00:20:42.973 lat (msec): min=15, max=143, avg=69.81, stdev=20.27 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 17], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 51], 00:20:42.973 
| 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:42.973 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:20:42.973 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 136], 00:20:42.973 | 99.99th=[ 144] 00:20:42.973 bw ( KiB/s): min= 704, max= 1408, per=4.19%, avg=914.40, stdev=135.44, samples=20 00:20:42.973 iops : min= 176, max= 352, avg=228.60, stdev=33.86, samples=20 00:20:42.973 lat (msec) : 20=1.39%, 50=17.69%, 100=73.46%, 250=7.45% 00:20:42.973 cpu : usr=36.60%, sys=2.24%, ctx=1131, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84067: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=227, BW=909KiB/s (931kB/s)(9128KiB/10039msec) 00:20:42.973 slat (usec): min=4, max=9962, avg=27.38, stdev=298.67 00:20:42.973 clat (msec): min=16, max=144, avg=70.22, stdev=18.87 00:20:42.973 lat (msec): min=16, max=144, avg=70.25, stdev=18.87 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 28], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:20:42.973 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:20:42.973 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:20:42.973 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 136], 00:20:42.973 | 99.99th=[ 144] 00:20:42.973 bw ( KiB/s): min= 752, max= 1280, per=4.15%, avg=906.30, stdev=107.96, samples=20 00:20:42.973 iops : min= 188, max= 320, avg=226.55, stdev=26.97, samples=20 00:20:42.973 lat (msec) : 20=0.18%, 50=15.25%, 100=77.04%, 250=7.54% 00:20:42.973 cpu : usr=40.22%, sys=2.40%, ctx=1593, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=1.3%, 4=4.9%, 8=78.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 filename2: (groupid=0, jobs=1): err= 0: pid=84068: Tue Dec 10 14:26:05 2024 00:20:42.973 read: IOPS=209, BW=837KiB/s (858kB/s)(8420KiB/10054msec) 00:20:42.973 slat (usec): min=8, max=3965, avg=15.22, stdev=86.30 00:20:42.973 clat (msec): min=10, max=156, avg=76.25, stdev=22.62 00:20:42.973 lat (msec): min=10, max=156, avg=76.26, stdev=22.62 00:20:42.973 clat percentiles (msec): 00:20:42.973 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 49], 20.00th=[ 62], 00:20:42.973 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:20:42.973 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 120], 00:20:42.973 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:20:42.973 | 99.99th=[ 157] 00:20:42.973 bw ( KiB/s): min= 640, max= 1523, per=3.82%, avg=834.95, stdev=185.98, samples=20 00:20:42.973 iops : min= 160, max= 380, avg=208.70, stdev=46.35, samples=20 00:20:42.973 lat (msec) : 20=2.28%, 50=9.98%, 100=75.20%, 250=12.54% 00:20:42.973 cpu : usr=33.97%, sys=2.09%, ctx=1016, majf=0, minf=9 00:20:42.973 IO depths : 1=0.1%, 2=2.7%, 4=11.4%, 8=70.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:42.973 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 complete : 0=0.0%, 4=90.7%, 8=6.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.973 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.973 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.973 00:20:42.973 Run status group 0 (all jobs): 00:20:42.973 READ: bw=21.3MiB/s (22.4MB/s), 819KiB/s-961KiB/s (839kB/s-984kB/s), io=215MiB (225MB), run=10001-10064msec 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 bdev_null0 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 [2024-12-10 14:26:05.957674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 bdev_null1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.974 { 00:20:42.974 "params": { 00:20:42.974 "name": "Nvme$subsystem", 00:20:42.974 "trtype": "$TEST_TRANSPORT", 00:20:42.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.974 "adrfam": "ipv4", 00:20:42.974 "trsvcid": "$NVMF_PORT", 00:20:42.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.974 "hdgst": ${hdgst:-false}, 00:20:42.974 "ddgst": ${ddgst:-false} 
00:20:42.974 }, 00:20:42.974 "method": "bdev_nvme_attach_controller" 00:20:42.974 } 00:20:42.974 EOF 00:20:42.974 )") 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.974 14:26:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.974 { 00:20:42.974 "params": { 00:20:42.974 "name": "Nvme$subsystem", 00:20:42.974 "trtype": "$TEST_TRANSPORT", 00:20:42.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.974 "adrfam": "ipv4", 00:20:42.974 "trsvcid": "$NVMF_PORT", 00:20:42.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.974 "hdgst": ${hdgst:-false}, 00:20:42.974 "ddgst": ${ddgst:-false} 00:20:42.974 }, 00:20:42.974 "method": "bdev_nvme_attach_controller" 00:20:42.974 } 00:20:42.974 EOF 00:20:42.974 )") 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:42.974 "params": { 00:20:42.974 "name": "Nvme0", 00:20:42.974 "trtype": "tcp", 00:20:42.974 "traddr": "10.0.0.3", 00:20:42.974 "adrfam": "ipv4", 00:20:42.974 "trsvcid": "4420", 00:20:42.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:42.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:42.974 "hdgst": false, 00:20:42.974 "ddgst": false 00:20:42.974 }, 00:20:42.974 "method": "bdev_nvme_attach_controller" 00:20:42.974 },{ 00:20:42.974 "params": { 00:20:42.974 "name": "Nvme1", 00:20:42.974 "trtype": "tcp", 00:20:42.974 "traddr": "10.0.0.3", 00:20:42.974 "adrfam": "ipv4", 00:20:42.974 "trsvcid": "4420", 00:20:42.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.974 "hdgst": false, 00:20:42.974 "ddgst": false 00:20:42.974 }, 00:20:42.974 "method": "bdev_nvme_attach_controller" 00:20:42.974 }' 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:42.974 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.974 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:42.974 ... 00:20:42.974 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:42.974 ... 
00:20:42.974 fio-3.35 00:20:42.974 Starting 4 threads 00:20:47.165 00:20:47.165 filename0: (groupid=0, jobs=1): err= 0: pid=84209: Tue Dec 10 14:26:11 2024 00:20:47.165 read: IOPS=2092, BW=16.3MiB/s (17.1MB/s)(81.8MiB/5002msec) 00:20:47.165 slat (nsec): min=3446, max=55733, avg=13934.98, stdev=4167.14 00:20:47.165 clat (usec): min=907, max=5733, avg=3769.14, stdev=349.55 00:20:47.165 lat (usec): min=915, max=5776, avg=3783.08, stdev=349.64 00:20:47.165 clat percentiles (usec): 00:20:47.165 | 1.00th=[ 2114], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:20:47.165 | 30.00th=[ 3720], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:47.165 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4047], 95.00th=[ 4228], 00:20:47.165 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 5342], 99.95th=[ 5342], 00:20:47.165 | 99.99th=[ 5538] 00:20:47.165 bw ( KiB/s): min=16144, max=17584, per=25.10%, avg=16742.89, stdev=370.79, samples=9 00:20:47.165 iops : min= 2018, max= 2198, avg=2092.78, stdev=46.35, samples=9 00:20:47.165 lat (usec) : 1000=0.11% 00:20:47.165 lat (msec) : 2=0.68%, 4=87.24%, 10=11.98% 00:20:47.165 cpu : usr=91.64%, sys=7.54%, ctx=12, majf=0, minf=10 00:20:47.165 IO depths : 1=0.1%, 2=23.6%, 4=50.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.165 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.165 issued rwts: total=10468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.165 filename0: (groupid=0, jobs=1): err= 0: pid=84210: Tue Dec 10 14:26:11 2024 00:20:47.165 read: IOPS=2113, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5004msec) 00:20:47.165 slat (nsec): min=6405, max=57940, avg=13855.85, stdev=4435.79 00:20:47.165 clat (usec): min=911, max=7024, avg=3733.11, stdev=401.21 00:20:47.165 lat (usec): min=920, max=7039, avg=3746.96, stdev=401.48 00:20:47.165 clat percentiles (usec): 00:20:47.165 | 1.00th=[ 1991], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3687], 00:20:47.165 | 30.00th=[ 3720], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:47.165 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4015], 95.00th=[ 4178], 00:20:47.165 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 4883], 99.95th=[ 6783], 00:20:47.165 | 99.99th=[ 6783] 00:20:47.165 bw ( KiB/s): min=16640, max=18528, per=25.40%, avg=16938.44, stdev=604.49, samples=9 00:20:47.165 iops : min= 2080, max= 2316, avg=2117.22, stdev=75.59, samples=9 00:20:47.165 lat (usec) : 1000=0.14% 00:20:47.165 lat (msec) : 2=0.89%, 4=88.56%, 10=10.41% 00:20:47.165 cpu : usr=91.76%, sys=7.40%, ctx=6, majf=0, minf=9 00:20:47.165 IO depths : 1=0.1%, 2=22.7%, 4=51.4%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.165 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.165 issued rwts: total=10577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.165 filename1: (groupid=0, jobs=1): err= 0: pid=84211: Tue Dec 10 14:26:11 2024 00:20:47.165 read: IOPS=2055, BW=16.1MiB/s (16.8MB/s)(80.3MiB/5001msec) 00:20:47.165 slat (nsec): min=7000, max=58665, avg=14305.34, stdev=4129.65 00:20:47.165 clat (usec): min=2768, max=5663, avg=3834.94, stdev=260.66 00:20:47.165 lat (usec): min=2775, max=5710, avg=3849.25, stdev=260.96 00:20:47.165 clat percentiles (usec): 00:20:47.165 | 1.00th=[ 3589], 5.00th=[ 
3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:20:47.165 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:47.165 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4113], 95.00th=[ 4424], 00:20:47.165 | 99.00th=[ 4883], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5473], 00:20:47.165 | 99.99th=[ 5538] 00:20:47.165 bw ( KiB/s): min=14605, max=16768, per=24.61%, avg=16411.89, stdev=705.06, samples=9 00:20:47.165 iops : min= 1825, max= 2096, avg=2051.33, stdev=88.29, samples=9 00:20:47.165 lat (msec) : 4=86.18%, 10=13.82% 00:20:47.165 cpu : usr=90.92%, sys=8.28%, ctx=7, majf=0, minf=0 00:20:47.165 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.165 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.165 issued rwts: total=10280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.165 filename1: (groupid=0, jobs=1): err= 0: pid=84212: Tue Dec 10 14:26:11 2024 00:20:47.165 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:20:47.165 slat (nsec): min=7037, max=56485, avg=14423.13, stdev=4587.54 00:20:47.165 clat (usec): min=906, max=7077, avg=3793.28, stdev=324.13 00:20:47.165 lat (usec): min=914, max=7092, avg=3807.70, stdev=324.36 00:20:47.165 clat percentiles (usec): 00:20:47.165 | 1.00th=[ 2278], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:20:47.165 | 30.00th=[ 3720], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:20:47.165 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4047], 95.00th=[ 4359], 00:20:47.165 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5473], 00:20:47.165 | 99.99th=[ 5669] 00:20:47.165 bw ( KiB/s): min=15470, max=17040, per=24.90%, avg=16607.56, stdev=449.85, samples=9 00:20:47.165 iops : min= 1933, max= 2130, avg=2075.78, stdev=56.47, samples=9 00:20:47.166 lat (usec) : 1000=0.06% 00:20:47.166 lat (msec) : 2=0.21%, 4=87.56%, 10=12.17% 00:20:47.166 cpu : usr=90.72%, sys=8.48%, ctx=4, majf=0, minf=0 00:20:47.166 IO depths : 1=0.1%, 2=24.1%, 4=50.5%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.166 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.166 issued rwts: total=10390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.166 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.166 00:20:47.166 Run status group 0 (all jobs): 00:20:47.166 READ: bw=65.1MiB/s (68.3MB/s), 16.1MiB/s-16.5MiB/s (16.8MB/s-17.3MB/s), io=326MiB (342MB), run=5001-5004msec 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 ************************************ 00:20:47.166 END TEST fio_dif_rand_params 00:20:47.166 ************************************ 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.166 00:20:47.166 real 0m23.254s 00:20:47.166 user 2m3.143s 00:20:47.166 sys 0m8.399s 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 14:26:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:47.166 14:26:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.166 14:26:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 ************************************ 00:20:47.166 START TEST fio_dif_digest 00:20:47.166 ************************************ 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 bdev_null0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.166 14:26:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:47.426 [2024-12-10 14:26:12.009801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.426 { 00:20:47.426 "params": { 00:20:47.426 "name": "Nvme$subsystem", 00:20:47.426 "trtype": "$TEST_TRANSPORT", 00:20:47.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.426 "adrfam": "ipv4", 00:20:47.426 "trsvcid": "$NVMF_PORT", 00:20:47.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.426 "hdgst": ${hdgst:-false}, 00:20:47.426 "ddgst": ${ddgst:-false} 00:20:47.426 }, 00:20:47.426 "method": "bdev_nvme_attach_controller" 00:20:47.426 } 00:20:47.426 EOF 00:20:47.426 )") 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:47.426 "params": { 00:20:47.426 "name": "Nvme0", 00:20:47.426 "trtype": "tcp", 00:20:47.426 "traddr": "10.0.0.3", 00:20:47.426 "adrfam": "ipv4", 00:20:47.426 "trsvcid": "4420", 00:20:47.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.426 "hdgst": true, 00:20:47.426 "ddgst": true 00:20:47.426 }, 00:20:47.426 "method": "bdev_nvme_attach_controller" 00:20:47.426 }' 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.426 14:26:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.426 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:47.426 ... 
00:20:47.426 fio-3.35 00:20:47.426 Starting 3 threads 00:20:59.638 00:20:59.638 filename0: (groupid=0, jobs=1): err= 0: pid=84318: Tue Dec 10 14:26:22 2024 00:20:59.638 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10005msec) 00:20:59.638 slat (nsec): min=6935, max=43108, avg=9788.61, stdev=3711.52 00:20:59.638 clat (usec): min=7430, max=15108, avg=12466.47, stdev=569.34 00:20:59.638 lat (usec): min=7437, max=15121, avg=12476.26, stdev=569.75 00:20:59.638 clat percentiles (usec): 00:20:59.638 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:20:59.638 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:20:59.638 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:20:59.638 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15139], 99.95th=[15139], 00:20:59.638 | 99.99th=[15139] 00:20:59.638 bw ( KiB/s): min=29184, max=31488, per=33.31%, avg=30720.00, stdev=787.95, samples=20 00:20:59.638 iops : min= 228, max= 246, avg=240.00, stdev= 6.16, samples=20 00:20:59.638 lat (msec) : 10=0.12%, 20=99.88% 00:20:59.638 cpu : usr=91.52%, sys=7.97%, ctx=20, majf=0, minf=0 00:20:59.638 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.638 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.638 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.638 filename0: (groupid=0, jobs=1): err= 0: pid=84319: Tue Dec 10 14:26:22 2024 00:20:59.638 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10007msec) 00:20:59.638 slat (nsec): min=7176, max=68133, avg=13697.80, stdev=3941.98 00:20:59.638 clat (usec): min=8811, max=15124, avg=12462.86, stdev=564.20 00:20:59.638 lat (usec): min=8818, max=15140, avg=12476.55, stdev=564.54 00:20:59.638 clat percentiles (usec): 00:20:59.638 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:20:59.638 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:20:59.638 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13173], 95.00th=[13698], 00:20:59.638 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15139], 99.95th=[15139], 00:20:59.638 | 99.99th=[15139] 00:20:59.638 bw ( KiB/s): min=28416, max=32256, per=33.32%, avg=30720.00, stdev=965.04, samples=20 00:20:59.638 iops : min= 222, max= 252, avg=240.00, stdev= 7.54, samples=20 00:20:59.638 lat (msec) : 10=0.12%, 20=99.88% 00:20:59.638 cpu : usr=90.62%, sys=8.89%, ctx=5, majf=0, minf=0 00:20:59.638 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.638 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.638 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.638 filename0: (groupid=0, jobs=1): err= 0: pid=84320: Tue Dec 10 14:26:22 2024 00:20:59.638 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10007msec) 00:20:59.638 slat (nsec): min=6888, max=43356, avg=14276.40, stdev=3950.55 00:20:59.638 clat (usec): min=8804, max=15124, avg=12460.60, stdev=562.78 00:20:59.638 lat (usec): min=8811, max=15140, avg=12474.88, stdev=563.30 00:20:59.638 clat percentiles (usec): 00:20:59.638 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:20:59.638 | 30.00th=[12125], 40.00th=[12125], 
50.00th=[12256], 60.00th=[12387], 00:20:59.638 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13698], 00:20:59.638 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15139], 99.95th=[15139], 00:20:59.638 | 99.99th=[15139] 00:20:59.638 bw ( KiB/s): min=28416, max=32256, per=33.32%, avg=30722.90, stdev=960.26, samples=20 00:20:59.638 iops : min= 222, max= 252, avg=240.00, stdev= 7.54, samples=20 00:20:59.638 lat (msec) : 10=0.12%, 20=99.88% 00:20:59.638 cpu : usr=91.50%, sys=7.97%, ctx=100, majf=0, minf=0 00:20:59.638 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.638 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.638 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.638 00:20:59.638 Run status group 0 (all jobs): 00:20:59.638 READ: bw=90.0MiB/s (94.4MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=901MiB (945MB), run=10005-10007msec 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.638 00:20:59.638 real 0m10.912s 00:20:59.638 user 0m28.002s 00:20:59.638 sys 0m2.693s 00:20:59.638 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.638 ************************************ 00:20:59.638 END TEST fio_dif_digest 00:20:59.638 ************************************ 00:20:59.639 14:26:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:59.639 14:26:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:59.639 14:26:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:59.639 14:26:22 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.639 14:26:22 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:59.639 14:26:22 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.639 14:26:22 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:59.639 14:26:22 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.639 14:26:22 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.639 rmmod nvme_tcp 00:20:59.639 rmmod nvme_fabrics 00:20:59.639 rmmod nvme_keyring 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.639 14:26:23 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83574 ']' 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83574 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83574 ']' 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83574 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83574 00:20:59.639 killing process with pid 83574 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83574' 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83574 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83574 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:59.639 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:59.639 Waiting for block devices as requested 00:20:59.639 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.639 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.639 14:26:23 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.639 14:26:23 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:59.639 00:20:59.639 real 0m58.824s 00:20:59.639 user 3m46.475s 00:20:59.639 sys 0m19.427s 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.639 14:26:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:59.639 ************************************ 00:20:59.639 END TEST nvmf_dif 00:20:59.639 ************************************ 00:20:59.639 14:26:24 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:59.639 14:26:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:59.639 14:26:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.639 14:26:24 -- common/autotest_common.sh@10 -- # set +x 00:20:59.639 ************************************ 00:20:59.639 START TEST nvmf_abort_qd_sizes 00:20:59.639 ************************************ 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:59.639 * Looking for test storage... 00:20:59.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.639 --rc genhtml_branch_coverage=1 00:20:59.639 --rc genhtml_function_coverage=1 00:20:59.639 --rc genhtml_legend=1 00:20:59.639 --rc geninfo_all_blocks=1 00:20:59.639 --rc geninfo_unexecuted_blocks=1 00:20:59.639 00:20:59.639 ' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.639 --rc genhtml_branch_coverage=1 00:20:59.639 --rc genhtml_function_coverage=1 00:20:59.639 --rc genhtml_legend=1 00:20:59.639 --rc geninfo_all_blocks=1 00:20:59.639 --rc geninfo_unexecuted_blocks=1 00:20:59.639 00:20:59.639 ' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.639 --rc genhtml_branch_coverage=1 00:20:59.639 --rc genhtml_function_coverage=1 00:20:59.639 --rc genhtml_legend=1 00:20:59.639 --rc geninfo_all_blocks=1 00:20:59.639 --rc geninfo_unexecuted_blocks=1 00:20:59.639 00:20:59.639 ' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.639 --rc genhtml_branch_coverage=1 00:20:59.639 --rc genhtml_function_coverage=1 00:20:59.639 --rc genhtml_legend=1 00:20:59.639 --rc geninfo_all_blocks=1 00:20:59.639 --rc geninfo_unexecuted_blocks=1 00:20:59.639 00:20:59.639 ' 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:20:59.639 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.640 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:59.640 Cannot find device "nvmf_init_br" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:59.640 Cannot find device "nvmf_init_br2" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:59.640 Cannot find device "nvmf_tgt_br" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.640 Cannot find device "nvmf_tgt_br2" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:59.640 Cannot find device "nvmf_init_br" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:59.640 Cannot find device "nvmf_init_br2" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:59.640 Cannot find device "nvmf_tgt_br" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:59.640 Cannot find device "nvmf_tgt_br2" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:59.640 Cannot find device "nvmf_br" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:59.640 Cannot find device "nvmf_init_if" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:59.640 Cannot find device "nvmf_init_if2" 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
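The teardown above only clears interfaces left from a previous run; the trace that follows rebuilds the NVMe/TCP test topology from scratch. A condensed sketch of what that setup amounts to, using the interface names, namespace and 10.0.0.0/24 addresses shown in the trace (second veth pair, iptables comments and error handling omitted; this is a hedged summary, not the nvmf_veth_init implementation itself):

  # Target end lives in a network namespace; initiator end stays on the host.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair (host side)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # bridge joins the two host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.3                                            # sanity check: host reaches the namespaced target IP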
00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:59.640 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:59.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:59.900 00:20:59.900 --- 10.0.0.3 ping statistics --- 00:20:59.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.900 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:59.900 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:59.900 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:20:59.900 00:20:59.900 --- 10.0.0.4 ping statistics --- 00:20:59.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.900 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:59.900 00:20:59.900 --- 10.0.0.1 ping statistics --- 00:20:59.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.900 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:59.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:59.900 00:20:59.900 --- 10.0.0.2 ping statistics --- 00:20:59.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.900 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:59.900 14:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:00.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:00.734 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:00.734 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84967 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84967 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84967 ']' 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.734 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:00.993 [2024-12-10 14:26:25.570295] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:21:00.993 [2024-12-10 14:26:25.570380] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.993 [2024-12-10 14:26:25.722647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.993 [2024-12-10 14:26:25.764970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.993 [2024-12-10 14:26:25.765035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.993 [2024-12-10 14:26:25.765050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.993 [2024-12-10 14:26:25.765061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.993 [2024-12-10 14:26:25.765070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.993 [2024-12-10 14:26:25.766032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.993 [2024-12-10 14:26:25.766178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.993 [2024-12-10 14:26:25.766315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.993 [2024-12-10 14:26:25.766316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.993 [2024-12-10 14:26:25.803062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:01.252 14:26:25 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:01.252 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
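The nvme_in_userspace trace above collects NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVM Express), which is the "0108"/-p02 match in the lspci pipeline, and then keeps only devices still bound to the kernel nvme driver. Roughly the same enumeration done standalone, as a sketch that follows the traced pipeline (no PCI_ALLOWED/PCI_BLOCKED filtering, bash assumed for [[ ]]):

  # Print BDFs of NVMe controllers (class/subclass/prog-if 01/08/02), keeping only
  # the ones currently bound to the kernel nvme driver, as the trace above does.
  # cc is assigned with literal quotes so it matches the quoted class field of lspci -mm.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' |
  while read -r bdf; do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
  done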
00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.253 14:26:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:01.253 ************************************ 00:21:01.253 START TEST spdk_target_abort 00:21:01.253 ************************************ 00:21:01.253 14:26:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:01.253 14:26:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:01.253 14:26:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:01.253 14:26:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.253 14:26:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:01.253 spdk_targetn1 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:01.253 [2024-12-10 14:26:26.026680] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:01.253 [2024-12-10 14:26:26.063388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:01.253 14:26:26 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:01.253 14:26:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:04.541 Initializing NVMe Controllers 00:21:04.541 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:04.541 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:04.541 Initialization complete. Launching workers. 
00:21:04.541 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9936, failed: 0 00:21:04.541 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1099, failed to submit 8837 00:21:04.541 success 844, unsuccessful 255, failed 0 00:21:04.541 14:26:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:04.541 14:26:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:08.732 Initializing NVMe Controllers 00:21:08.732 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:08.732 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:08.732 Initialization complete. Launching workers. 00:21:08.732 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:21:08.732 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1177, failed to submit 7823 00:21:08.732 success 407, unsuccessful 770, failed 0 00:21:08.732 14:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:08.732 14:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:11.266 Initializing NVMe Controllers 00:21:11.266 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:11.266 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:11.266 Initialization complete. Launching workers. 
00:21:11.266 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31231, failed: 0 00:21:11.266 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2348, failed to submit 28883 00:21:11.266 success 486, unsuccessful 1862, failed 0 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.266 14:26:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84967 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84967 ']' 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84967 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84967 00:21:11.834 killing process with pid 84967 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84967' 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84967 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84967 00:21:11.834 ************************************ 00:21:11.834 END TEST spdk_target_abort 00:21:11.834 ************************************ 00:21:11.834 00:21:11.834 real 0m10.694s 00:21:11.834 user 0m41.066s 00:21:11.834 sys 0m2.052s 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.834 14:26:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:12.093 14:26:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:12.093 14:26:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:12.093 14:26:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.093 14:26:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:12.093 ************************************ 00:21:12.093 START TEST kernel_target_abort 00:21:12.093 
************************************ 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:12.093 14:26:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:12.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:12.352 Waiting for block devices as requested 00:21:12.352 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:12.611 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:12.611 No valid GPT data, bailing 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:12.611 No valid GPT data, bailing 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:12.611 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:12.611 No valid GPT data, bailing 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:12.871 No valid GPT data, bailing 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 --hostid=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 -a 10.0.0.1 -t tcp -s 4420 00:21:12.871 00:21:12.871 Discovery Log Number of Records 2, Generation counter 2 00:21:12.871 =====Discovery Log Entry 0====== 00:21:12.871 trtype: tcp 00:21:12.871 adrfam: ipv4 00:21:12.871 subtype: current discovery subsystem 00:21:12.871 treq: not specified, sq flow control disable supported 00:21:12.871 portid: 1 00:21:12.871 trsvcid: 4420 00:21:12.871 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:12.871 traddr: 10.0.0.1 00:21:12.871 eflags: none 00:21:12.871 sectype: none 00:21:12.871 =====Discovery Log Entry 1====== 00:21:12.871 trtype: tcp 00:21:12.871 adrfam: ipv4 00:21:12.871 subtype: nvme subsystem 00:21:12.871 treq: not specified, sq flow control disable supported 00:21:12.871 portid: 1 00:21:12.871 trsvcid: 4420 00:21:12.871 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:12.871 traddr: 10.0.0.1 00:21:12.871 eflags: none 00:21:12.871 sectype: none 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:12.871 14:26:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:12.871 14:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:16.161 Initializing NVMe Controllers 00:21:16.161 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:16.161 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:16.161 Initialization complete. Launching workers. 00:21:16.161 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32267, failed: 0 00:21:16.161 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32267, failed to submit 0 00:21:16.161 success 0, unsuccessful 32267, failed 0 00:21:16.161 14:26:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:16.161 14:26:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:19.448 Initializing NVMe Controllers 00:21:19.448 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:19.448 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:19.448 Initialization complete. Launching workers. 
00:21:19.448 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63395, failed: 0 00:21:19.448 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25045, failed to submit 38350 00:21:19.448 success 0, unsuccessful 25045, failed 0 00:21:19.448 14:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:19.448 14:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.737 Initializing NVMe Controllers 00:21:22.737 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.737 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.737 Initialization complete. Launching workers. 00:21:22.737 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68029, failed: 0 00:21:22.737 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16974, failed to submit 51055 00:21:22.737 success 0, unsuccessful 16974, failed 0 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:22.737 14:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:22.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.406 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:24.406 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:24.665 00:21:24.665 real 0m12.566s 00:21:24.665 user 0m5.495s 00:21:24.665 sys 0m4.422s 00:21:24.665 14:26:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.665 ************************************ 00:21:24.665 14:26:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.665 END TEST kernel_target_abort 00:21:24.665 ************************************ 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:24.665 
14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.665 rmmod nvme_tcp 00:21:24.665 rmmod nvme_fabrics 00:21:24.665 rmmod nvme_keyring 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84967 ']' 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84967 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84967 ']' 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84967 00:21:24.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84967) - No such process 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84967 is not found' 00:21:24.665 Process with pid 84967 is not found 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:24.665 14:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:25.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.233 Waiting for block devices as requested 00:21:25.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:25.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:25.233 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:25.492 14:26:50 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:25.492 00:21:25.492 real 0m26.245s 00:21:25.492 user 0m47.731s 00:21:25.492 sys 0m7.911s 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.492 14:26:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:25.492 ************************************ 00:21:25.492 END TEST nvmf_abort_qd_sizes 00:21:25.492 ************************************ 00:21:25.492 14:26:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:25.492 14:26:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.492 14:26:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.492 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:25.492 ************************************ 00:21:25.492 START TEST keyring_file 00:21:25.492 ************************************ 00:21:25.492 14:26:50 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:25.751 * Looking for test storage... 
00:21:25.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:25.751 14:26:50 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:25.751 14:26:50 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:25.751 14:26:50 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:21:25.751 14:26:50 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.751 14:26:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:25.752 14:26:50 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.752 14:26:50 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.752 --rc genhtml_branch_coverage=1 00:21:25.752 --rc genhtml_function_coverage=1 00:21:25.752 --rc genhtml_legend=1 00:21:25.752 --rc geninfo_all_blocks=1 00:21:25.752 --rc geninfo_unexecuted_blocks=1 00:21:25.752 00:21:25.752 ' 00:21:25.752 14:26:50 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.752 --rc genhtml_branch_coverage=1 00:21:25.752 --rc genhtml_function_coverage=1 00:21:25.752 --rc genhtml_legend=1 00:21:25.752 --rc geninfo_all_blocks=1 00:21:25.752 --rc 
geninfo_unexecuted_blocks=1 00:21:25.752 00:21:25.752 ' 00:21:25.752 14:26:50 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.752 --rc genhtml_branch_coverage=1 00:21:25.752 --rc genhtml_function_coverage=1 00:21:25.752 --rc genhtml_legend=1 00:21:25.752 --rc geninfo_all_blocks=1 00:21:25.752 --rc geninfo_unexecuted_blocks=1 00:21:25.752 00:21:25.752 ' 00:21:25.752 14:26:50 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.752 --rc genhtml_branch_coverage=1 00:21:25.752 --rc genhtml_function_coverage=1 00:21:25.752 --rc genhtml_legend=1 00:21:25.752 --rc geninfo_all_blocks=1 00:21:25.752 --rc geninfo_unexecuted_blocks=1 00:21:25.752 00:21:25.752 ' 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.752 14:26:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.752 14:26:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.752 14:26:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.752 14:26:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.752 14:26:50 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.752 14:26:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.752 14:26:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.752 14:26:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:25.752 14:26:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:25.752 14:26:50 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PcEgiHpWxP 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:25.752 14:26:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PcEgiHpWxP 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PcEgiHpWxP 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.PcEgiHpWxP 00:21:25.752 14:26:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:25.752 14:26:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:26.011 14:26:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:26.011 14:26:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BIVWQeTqCR 00:21:26.011 14:26:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:26.011 14:26:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:26.011 14:26:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:26.011 14:26:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:26.011 14:26:50 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:26.011 14:26:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:26.011 14:26:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:26.011 14:26:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BIVWQeTqCR 00:21:26.011 14:26:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BIVWQeTqCR 00:21:26.011 14:26:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BIVWQeTqCR 00:21:26.011 14:26:50 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:26.011 14:26:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=85872 00:21:26.012 14:26:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85872 00:21:26.012 14:26:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85872 ']' 00:21:26.012 14:26:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.012 14:26:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
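Annotation: the prep_key trace just above reduces to a few lines of shell. It creates a temp file, wraps the raw hex key in the NVMe TLS interchange format (prefix NVMeTLSkey-1, digest 0, encoded by the inline python helper in nvmf/common.sh), and restricts the file to mode 0600 so the keyring will accept it. A minimal sketch, assuming nvmf/common.sh has been sourced as in the trace; the redirect into "$path" is implied by the trace rather than shown:

# sketch of keyring/common.sh prep_key key0 00112233445566778899aabbccddeeff 0
key=00112233445566778899aabbccddeeff
path=$(mktemp)                              # e.g. /tmp/tmp.PcEgiHpWxP in this run
format_interchange_psk "$key" 0 > "$path"   # emits NVMeTLSkey-1:00:...: via the python helper
chmod 0600 "$path"                          # looser modes (0660) are rejected later in this test
echo "$path"                                # consumed by file.sh as key0path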
00:21:26.012 14:26:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.012 14:26:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.012 14:26:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:26.012 [2024-12-10 14:26:50.711861] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:21:26.012 [2024-12-10 14:26:50.711979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85872 ] 00:21:26.272 [2024-12-10 14:26:50.863874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.272 [2024-12-10 14:26:50.904238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.272 [2024-12-10 14:26:50.954187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:26.272 14:26:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.272 14:26:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:26.272 14:26:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:26.272 14:26:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.272 14:26:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:26.531 [2024-12-10 14:26:51.108661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.531 null0 00:21:26.531 [2024-12-10 14:26:51.140631] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.531 [2024-12-10 14:26:51.140903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.531 14:26:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:26.531 [2024-12-10 14:26:51.172626] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:26.531 request: 00:21:26.531 { 00:21:26.531 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.531 "secure_channel": false, 00:21:26.531 "listen_address": { 00:21:26.531 "trtype": "tcp", 00:21:26.531 "traddr": "127.0.0.1", 00:21:26.531 "trsvcid": "4420" 00:21:26.531 }, 00:21:26.531 "method": "nvmf_subsystem_add_listener", 
00:21:26.531 "req_id": 1 00:21:26.531 } 00:21:26.531 Got JSON-RPC error response 00:21:26.531 response: 00:21:26.531 { 00:21:26.531 "code": -32602, 00:21:26.531 "message": "Invalid parameters" 00:21:26.531 } 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.531 14:26:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=85882 00:21:26.531 14:26:51 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:26.531 14:26:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85882 /var/tmp/bperf.sock 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85882 ']' 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.531 14:26:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:26.531 [2024-12-10 14:26:51.236568] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
00:21:26.531 [2024-12-10 14:26:51.236660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85882 ] 00:21:26.790 [2024-12-10 14:26:51.382763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.790 [2024-12-10 14:26:51.411442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.790 [2024-12-10 14:26:51.439354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:26.790 14:26:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.790 14:26:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:26.790 14:26:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:26.790 14:26:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:27.049 14:26:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BIVWQeTqCR 00:21:27.049 14:26:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BIVWQeTqCR 00:21:27.307 14:26:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:27.307 14:26:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:27.307 14:26:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:27.307 14:26:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.307 14:26:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:27.565 14:26:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PcEgiHpWxP == \/\t\m\p\/\t\m\p\.\P\c\E\g\i\H\p\W\x\P ]] 00:21:27.565 14:26:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:27.565 14:26:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:27.565 14:26:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:27.565 14:26:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.565 14:26:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:27.823 14:26:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BIVWQeTqCR == \/\t\m\p\/\t\m\p\.\B\I\V\W\Q\e\T\q\C\R ]] 00:21:27.823 14:26:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:27.823 14:26:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:27.823 14:26:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:27.823 14:26:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:27.823 14:26:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.823 14:26:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:28.081 14:26:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:28.081 14:26:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:28.081 14:26:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:28.081 14:26:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.081 14:26:52 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:28.081 14:26:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.081 14:26:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.340 14:26:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:28.340 14:26:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:28.340 14:26:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:28.598 [2024-12-10 14:26:53.281683] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.598 nvme0n1 00:21:28.598 14:26:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:28.598 14:26:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:28.598 14:26:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.598 14:26:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.598 14:26:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:28.598 14:26:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.856 14:26:53 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:28.856 14:26:53 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:28.856 14:26:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:28.856 14:26:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.856 14:26:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:28.856 14:26:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.856 14:26:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.422 14:26:53 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:29.422 14:26:53 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:29.422 Running I/O for 1 seconds... 
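Annotation: the refcnt assertions above query the bdevperf keyring and filter the result with jq; attaching nvme0 with --psk key0 is what raises key0 from one reference (right after keyring_file_add_key) to two, while key1 stays at one. A condensed sketch of the step that precedes the I/O run, with the two jq stages from the trace folded into one expression:

# attach an NVMe-oF TCP controller through the 127.0.0.1:4420 listener using the
# file-backed PSK registered as key0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# key0 refcnt is now 2 (the second reference belongs to the attached controller)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'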
00:21:30.359 13701.00 IOPS, 53.52 MiB/s 00:21:30.359 Latency(us) 00:21:30.359 [2024-12-10T14:26:55.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.359 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:30.359 nvme0n1 : 1.01 13745.66 53.69 0.00 0.00 9288.20 4140.68 15192.44 00:21:30.359 [2024-12-10T14:26:55.196Z] =================================================================================================================== 00:21:30.359 [2024-12-10T14:26:55.196Z] Total : 13745.66 53.69 0.00 0.00 9288.20 4140.68 15192.44 00:21:30.359 { 00:21:30.359 "results": [ 00:21:30.359 { 00:21:30.359 "job": "nvme0n1", 00:21:30.359 "core_mask": "0x2", 00:21:30.359 "workload": "randrw", 00:21:30.359 "percentage": 50, 00:21:30.359 "status": "finished", 00:21:30.359 "queue_depth": 128, 00:21:30.359 "io_size": 4096, 00:21:30.359 "runtime": 1.006136, 00:21:30.359 "iops": 13745.656650790748, 00:21:30.359 "mibps": 53.69397129215136, 00:21:30.359 "io_failed": 0, 00:21:30.359 "io_timeout": 0, 00:21:30.359 "avg_latency_us": 9288.196543482549, 00:21:30.359 "min_latency_us": 4140.683636363637, 00:21:30.359 "max_latency_us": 15192.436363636363 00:21:30.359 } 00:21:30.359 ], 00:21:30.359 "core_count": 1 00:21:30.359 } 00:21:30.359 14:26:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:30.359 14:26:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:30.617 14:26:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:30.617 14:26:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:30.617 14:26:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:30.617 14:26:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.617 14:26:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.617 14:26:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:30.875 14:26:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:30.875 14:26:55 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:30.875 14:26:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:30.875 14:26:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:30.875 14:26:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.875 14:26:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.875 14:26:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:31.133 14:26:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:31.133 14:26:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:31.133 14:26:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:31.133 14:26:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:31.133 14:26:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:31.133 14:26:55 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.133 14:26:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:31.133 14:26:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.133 14:26:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:31.133 14:26:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:31.392 [2024-12-10 14:26:56.134758] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:31.392 [2024-12-10 14:26:56.135651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d65d0 (107): Transport endpoint is not connected 00:21:31.392 [2024-12-10 14:26:56.136641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d65d0 (9): Bad file descriptor 00:21:31.392 [2024-12-10 14:26:56.137638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:31.392 [2024-12-10 14:26:56.137672] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:31.392 [2024-12-10 14:26:56.137697] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:31.392 [2024-12-10 14:26:56.137707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:31.392 request: 00:21:31.392 { 00:21:31.392 "name": "nvme0", 00:21:31.392 "trtype": "tcp", 00:21:31.392 "traddr": "127.0.0.1", 00:21:31.392 "adrfam": "ipv4", 00:21:31.392 "trsvcid": "4420", 00:21:31.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:31.392 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:31.392 "prchk_reftag": false, 00:21:31.392 "prchk_guard": false, 00:21:31.392 "hdgst": false, 00:21:31.392 "ddgst": false, 00:21:31.392 "psk": "key1", 00:21:31.392 "allow_unrecognized_csi": false, 00:21:31.392 "method": "bdev_nvme_attach_controller", 00:21:31.392 "req_id": 1 00:21:31.392 } 00:21:31.392 Got JSON-RPC error response 00:21:31.392 response: 00:21:31.392 { 00:21:31.392 "code": -5, 00:21:31.392 "message": "Input/output error" 00:21:31.392 } 00:21:31.392 14:26:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:31.392 14:26:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.392 14:26:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.392 14:26:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.392 14:26:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:31.392 14:26:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:31.392 14:26:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.392 14:26:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.392 14:26:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.392 14:26:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:31.651 14:26:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:31.651 14:26:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:31.651 14:26:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:31.651 14:26:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.651 14:26:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.651 14:26:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:31.651 14:26:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.909 14:26:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:31.909 14:26:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:31.909 14:26:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:32.167 14:26:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:32.167 14:26:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:32.426 14:26:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:32.426 14:26:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:32.426 14:26:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.685 14:26:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:32.685 14:26:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.PcEgiHpWxP 00:21:32.685 14:26:57 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:32.685 14:26:57 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:32.685 14:26:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:32.685 14:26:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:32.685 14:26:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.685 14:26:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:32.685 14:26:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.685 14:26:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:32.685 14:26:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:32.944 [2024-12-10 14:26:57.727045] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PcEgiHpWxP': 0100660 00:21:32.944 [2024-12-10 14:26:57.727096] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:32.944 request: 00:21:32.944 { 00:21:32.944 "name": "key0", 00:21:32.944 "path": "/tmp/tmp.PcEgiHpWxP", 00:21:32.944 "method": "keyring_file_add_key", 00:21:32.944 "req_id": 1 00:21:32.944 } 00:21:32.944 Got JSON-RPC error response 00:21:32.944 response: 00:21:32.944 { 00:21:32.944 "code": -1, 00:21:32.944 "message": "Operation not permitted" 00:21:32.944 } 00:21:32.944 14:26:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:32.944 14:26:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.944 14:26:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.944 14:26:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.944 14:26:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.PcEgiHpWxP 00:21:32.944 14:26:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:32.944 14:26:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PcEgiHpWxP 00:21:33.203 14:26:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.PcEgiHpWxP 00:21:33.203 14:26:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:33.203 14:26:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:33.203 14:26:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.203 14:26:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.203 14:26:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.203 14:26:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:33.462 14:26:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:33.462 14:26:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:33.462 14:26:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:33.462 14:26:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:33.462 14:26:58 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:33.462 14:26:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.462 14:26:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:33.462 14:26:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.462 14:26:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:33.462 14:26:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:33.722 [2024-12-10 14:26:58.411156] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.PcEgiHpWxP': No such file or directory 00:21:33.722 [2024-12-10 14:26:58.411210] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:33.722 [2024-12-10 14:26:58.411246] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:33.722 [2024-12-10 14:26:58.411254] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:33.722 [2024-12-10 14:26:58.411262] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:33.722 [2024-12-10 14:26:58.411270] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:33.722 request: 00:21:33.722 { 00:21:33.722 "name": "nvme0", 00:21:33.722 "trtype": "tcp", 00:21:33.722 "traddr": "127.0.0.1", 00:21:33.722 "adrfam": "ipv4", 00:21:33.722 "trsvcid": "4420", 00:21:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:33.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:33.722 "prchk_reftag": false, 00:21:33.722 "prchk_guard": false, 00:21:33.722 "hdgst": false, 00:21:33.722 "ddgst": false, 00:21:33.722 "psk": "key0", 00:21:33.722 "allow_unrecognized_csi": false, 00:21:33.722 "method": "bdev_nvme_attach_controller", 00:21:33.722 "req_id": 1 00:21:33.722 } 00:21:33.722 Got JSON-RPC error response 00:21:33.722 response: 00:21:33.722 { 00:21:33.722 "code": -19, 00:21:33.722 "message": "No such device" 00:21:33.722 } 00:21:33.722 14:26:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:33.722 14:26:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:33.722 14:26:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:33.722 14:26:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:33.722 14:26:58 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:33.722 14:26:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:33.981 14:26:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:33.981 
14:26:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3nSrZWt1fN 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:33.981 14:26:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:33.981 14:26:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:33.981 14:26:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:33.981 14:26:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:33.981 14:26:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:33.981 14:26:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3nSrZWt1fN 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3nSrZWt1fN 00:21:33.981 14:26:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.3nSrZWt1fN 00:21:33.981 14:26:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3nSrZWt1fN 00:21:33.981 14:26:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3nSrZWt1fN 00:21:34.240 14:26:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:34.240 14:26:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:34.499 nvme0n1 00:21:34.758 14:26:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:34.758 14:26:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:34.758 14:26:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.758 14:26:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:34.758 14:26:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.758 14:26:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.758 14:26:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:34.758 14:26:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:34.758 14:26:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:35.017 14:26:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:35.017 14:26:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:35.017 14:26:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.017 14:26:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.017 14:26:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.276 14:27:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:35.276 14:27:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:35.276 14:27:00 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.276 14:27:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:35.276 14:27:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.276 14:27:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.276 14:27:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.534 14:27:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:35.534 14:27:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:35.534 14:27:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:35.793 14:27:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:35.793 14:27:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.793 14:27:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:36.051 14:27:00 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:36.051 14:27:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3nSrZWt1fN 00:21:36.051 14:27:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3nSrZWt1fN 00:21:36.310 14:27:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BIVWQeTqCR 00:21:36.310 14:27:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BIVWQeTqCR 00:21:36.569 14:27:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.569 14:27:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.827 nvme0n1 00:21:37.086 14:27:01 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:37.086 14:27:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:37.345 14:27:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:37.345 "subsystems": [ 00:21:37.345 { 00:21:37.345 "subsystem": "keyring", 00:21:37.345 "config": [ 00:21:37.345 { 00:21:37.345 "method": "keyring_file_add_key", 00:21:37.345 "params": { 00:21:37.345 "name": "key0", 00:21:37.345 "path": "/tmp/tmp.3nSrZWt1fN" 00:21:37.345 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "keyring_file_add_key", 00:21:37.346 "params": { 00:21:37.346 "name": "key1", 00:21:37.346 "path": "/tmp/tmp.BIVWQeTqCR" 00:21:37.346 } 00:21:37.346 } 00:21:37.346 ] 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "subsystem": "iobuf", 00:21:37.346 "config": [ 00:21:37.346 { 00:21:37.346 "method": "iobuf_set_options", 00:21:37.346 "params": { 00:21:37.346 "small_pool_count": 8192, 00:21:37.346 "large_pool_count": 1024, 00:21:37.346 "small_bufsize": 8192, 00:21:37.346 "large_bufsize": 135168, 00:21:37.346 "enable_numa": false 00:21:37.346 } 00:21:37.346 } 00:21:37.346 ] 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "subsystem": 
"sock", 00:21:37.346 "config": [ 00:21:37.346 { 00:21:37.346 "method": "sock_set_default_impl", 00:21:37.346 "params": { 00:21:37.346 "impl_name": "uring" 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "sock_impl_set_options", 00:21:37.346 "params": { 00:21:37.346 "impl_name": "ssl", 00:21:37.346 "recv_buf_size": 4096, 00:21:37.346 "send_buf_size": 4096, 00:21:37.346 "enable_recv_pipe": true, 00:21:37.346 "enable_quickack": false, 00:21:37.346 "enable_placement_id": 0, 00:21:37.346 "enable_zerocopy_send_server": true, 00:21:37.346 "enable_zerocopy_send_client": false, 00:21:37.346 "zerocopy_threshold": 0, 00:21:37.346 "tls_version": 0, 00:21:37.346 "enable_ktls": false 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "sock_impl_set_options", 00:21:37.346 "params": { 00:21:37.346 "impl_name": "posix", 00:21:37.346 "recv_buf_size": 2097152, 00:21:37.346 "send_buf_size": 2097152, 00:21:37.346 "enable_recv_pipe": true, 00:21:37.346 "enable_quickack": false, 00:21:37.346 "enable_placement_id": 0, 00:21:37.346 "enable_zerocopy_send_server": true, 00:21:37.346 "enable_zerocopy_send_client": false, 00:21:37.346 "zerocopy_threshold": 0, 00:21:37.346 "tls_version": 0, 00:21:37.346 "enable_ktls": false 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "sock_impl_set_options", 00:21:37.346 "params": { 00:21:37.346 "impl_name": "uring", 00:21:37.346 "recv_buf_size": 2097152, 00:21:37.346 "send_buf_size": 2097152, 00:21:37.346 "enable_recv_pipe": true, 00:21:37.346 "enable_quickack": false, 00:21:37.346 "enable_placement_id": 0, 00:21:37.346 "enable_zerocopy_send_server": false, 00:21:37.346 "enable_zerocopy_send_client": false, 00:21:37.346 "zerocopy_threshold": 0, 00:21:37.346 "tls_version": 0, 00:21:37.346 "enable_ktls": false 00:21:37.346 } 00:21:37.346 } 00:21:37.346 ] 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "subsystem": "vmd", 00:21:37.346 "config": [] 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "subsystem": "accel", 00:21:37.346 "config": [ 00:21:37.346 { 00:21:37.346 "method": "accel_set_options", 00:21:37.346 "params": { 00:21:37.346 "small_cache_size": 128, 00:21:37.346 "large_cache_size": 16, 00:21:37.346 "task_count": 2048, 00:21:37.346 "sequence_count": 2048, 00:21:37.346 "buf_count": 2048 00:21:37.346 } 00:21:37.346 } 00:21:37.346 ] 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "subsystem": "bdev", 00:21:37.346 "config": [ 00:21:37.346 { 00:21:37.346 "method": "bdev_set_options", 00:21:37.346 "params": { 00:21:37.346 "bdev_io_pool_size": 65535, 00:21:37.346 "bdev_io_cache_size": 256, 00:21:37.346 "bdev_auto_examine": true, 00:21:37.346 "iobuf_small_cache_size": 128, 00:21:37.346 "iobuf_large_cache_size": 16 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "bdev_raid_set_options", 00:21:37.346 "params": { 00:21:37.346 "process_window_size_kb": 1024, 00:21:37.346 "process_max_bandwidth_mb_sec": 0 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "bdev_iscsi_set_options", 00:21:37.346 "params": { 00:21:37.346 "timeout_sec": 30 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "bdev_nvme_set_options", 00:21:37.346 "params": { 00:21:37.346 "action_on_timeout": "none", 00:21:37.346 "timeout_us": 0, 00:21:37.346 "timeout_admin_us": 0, 00:21:37.346 "keep_alive_timeout_ms": 10000, 00:21:37.346 "arbitration_burst": 0, 00:21:37.346 "low_priority_weight": 0, 00:21:37.346 "medium_priority_weight": 0, 00:21:37.346 "high_priority_weight": 0, 00:21:37.346 "nvme_adminq_poll_period_us": 
10000, 00:21:37.346 "nvme_ioq_poll_period_us": 0, 00:21:37.346 "io_queue_requests": 512, 00:21:37.346 "delay_cmd_submit": true, 00:21:37.346 "transport_retry_count": 4, 00:21:37.346 "bdev_retry_count": 3, 00:21:37.346 "transport_ack_timeout": 0, 00:21:37.346 "ctrlr_loss_timeout_sec": 0, 00:21:37.346 "reconnect_delay_sec": 0, 00:21:37.346 "fast_io_fail_timeout_sec": 0, 00:21:37.346 "disable_auto_failback": false, 00:21:37.346 "generate_uuids": false, 00:21:37.346 "transport_tos": 0, 00:21:37.346 "nvme_error_stat": false, 00:21:37.346 "rdma_srq_size": 0, 00:21:37.346 "io_path_stat": false, 00:21:37.346 "allow_accel_sequence": false, 00:21:37.346 "rdma_max_cq_size": 0, 00:21:37.346 "rdma_cm_event_timeout_ms": 0, 00:21:37.346 "dhchap_digests": [ 00:21:37.346 "sha256", 00:21:37.346 "sha384", 00:21:37.346 "sha512" 00:21:37.346 ], 00:21:37.346 "dhchap_dhgroups": [ 00:21:37.346 "null", 00:21:37.346 "ffdhe2048", 00:21:37.346 "ffdhe3072", 00:21:37.346 "ffdhe4096", 00:21:37.346 "ffdhe6144", 00:21:37.346 "ffdhe8192" 00:21:37.346 ] 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "bdev_nvme_attach_controller", 00:21:37.346 "params": { 00:21:37.346 "name": "nvme0", 00:21:37.346 "trtype": "TCP", 00:21:37.346 "adrfam": "IPv4", 00:21:37.346 "traddr": "127.0.0.1", 00:21:37.346 "trsvcid": "4420", 00:21:37.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.346 "prchk_reftag": false, 00:21:37.346 "prchk_guard": false, 00:21:37.346 "ctrlr_loss_timeout_sec": 0, 00:21:37.346 "reconnect_delay_sec": 0, 00:21:37.346 "fast_io_fail_timeout_sec": 0, 00:21:37.346 "psk": "key0", 00:21:37.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:37.346 "hdgst": false, 00:21:37.346 "ddgst": false, 00:21:37.346 "multipath": "multipath" 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "bdev_nvme_set_hotplug", 00:21:37.346 "params": { 00:21:37.346 "period_us": 100000, 00:21:37.346 "enable": false 00:21:37.346 } 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "method": "bdev_wait_for_examine" 00:21:37.346 } 00:21:37.346 ] 00:21:37.346 }, 00:21:37.346 { 00:21:37.346 "subsystem": "nbd", 00:21:37.346 "config": [] 00:21:37.346 } 00:21:37.346 ] 00:21:37.346 }' 00:21:37.346 14:27:02 keyring_file -- keyring/file.sh@115 -- # killprocess 85882 00:21:37.346 14:27:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85882 ']' 00:21:37.346 14:27:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85882 00:21:37.346 14:27:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:37.346 14:27:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.346 14:27:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85882 00:21:37.346 killing process with pid 85882 00:21:37.346 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.346 00:21:37.346 Latency(us) 00:21:37.346 [2024-12-10T14:27:02.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.346 [2024-12-10T14:27:02.183Z] =================================================================================================================== 00:21:37.346 [2024-12-10T14:27:02.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.346 14:27:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.347 14:27:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.347 14:27:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85882' 00:21:37.347 
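Annotation: with the round-trip checks done, file.sh has captured the full bdevperf configuration via save_config (the JSON dumped above), and it now kills the first bdevperf (pid 85882) and relaunches it from that JSON. The configuration is fed back in through a process substitution, which is why the next invocation in the trace carries -c /dev/fd/63. Roughly, and assuming that mechanism:

config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# first instance is stopped, then a new one starts from the saved config;
# <(echo "$config") is what appears as /dev/fd/63 in the trace below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 \
    -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")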
14:27:02 keyring_file -- common/autotest_common.sh@973 -- # kill 85882 00:21:37.347 14:27:02 keyring_file -- common/autotest_common.sh@978 -- # wait 85882 00:21:37.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:37.606 14:27:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=86119 00:21:37.606 14:27:02 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:37.606 14:27:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86119 /var/tmp/bperf.sock 00:21:37.606 14:27:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86119 ']' 00:21:37.606 14:27:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:37.606 "subsystems": [ 00:21:37.606 { 00:21:37.606 "subsystem": "keyring", 00:21:37.606 "config": [ 00:21:37.606 { 00:21:37.606 "method": "keyring_file_add_key", 00:21:37.606 "params": { 00:21:37.606 "name": "key0", 00:21:37.606 "path": "/tmp/tmp.3nSrZWt1fN" 00:21:37.606 } 00:21:37.606 }, 00:21:37.606 { 00:21:37.606 "method": "keyring_file_add_key", 00:21:37.606 "params": { 00:21:37.606 "name": "key1", 00:21:37.606 "path": "/tmp/tmp.BIVWQeTqCR" 00:21:37.606 } 00:21:37.606 } 00:21:37.606 ] 00:21:37.606 }, 00:21:37.606 { 00:21:37.606 "subsystem": "iobuf", 00:21:37.606 "config": [ 00:21:37.606 { 00:21:37.606 "method": "iobuf_set_options", 00:21:37.606 "params": { 00:21:37.606 "small_pool_count": 8192, 00:21:37.606 "large_pool_count": 1024, 00:21:37.606 "small_bufsize": 8192, 00:21:37.606 "large_bufsize": 135168, 00:21:37.606 "enable_numa": false 00:21:37.606 } 00:21:37.606 } 00:21:37.606 ] 00:21:37.606 }, 00:21:37.606 { 00:21:37.606 "subsystem": "sock", 00:21:37.606 "config": [ 00:21:37.606 { 00:21:37.606 "method": "sock_set_default_impl", 00:21:37.606 "params": { 00:21:37.606 "impl_name": "uring" 00:21:37.606 } 00:21:37.606 }, 00:21:37.606 { 00:21:37.606 "method": "sock_impl_set_options", 00:21:37.606 "params": { 00:21:37.606 "impl_name": "ssl", 00:21:37.606 "recv_buf_size": 4096, 00:21:37.606 "send_buf_size": 4096, 00:21:37.606 "enable_recv_pipe": true, 00:21:37.606 "enable_quickack": false, 00:21:37.606 "enable_placement_id": 0, 00:21:37.606 "enable_zerocopy_send_server": true, 00:21:37.606 "enable_zerocopy_send_client": false, 00:21:37.606 "zerocopy_threshold": 0, 00:21:37.606 "tls_version": 0, 00:21:37.606 "enable_ktls": false 00:21:37.606 } 00:21:37.606 }, 00:21:37.606 { 00:21:37.606 "method": "sock_impl_set_options", 00:21:37.606 "params": { 00:21:37.607 "impl_name": "posix", 00:21:37.607 "recv_buf_size": 2097152, 00:21:37.607 "send_buf_size": 2097152, 00:21:37.607 "enable_recv_pipe": true, 00:21:37.607 "enable_quickack": false, 00:21:37.607 "enable_placement_id": 0, 00:21:37.607 "enable_zerocopy_send_server": true, 00:21:37.607 "enable_zerocopy_send_client": false, 00:21:37.607 "zerocopy_threshold": 0, 00:21:37.607 "tls_version": 0, 00:21:37.607 "enable_ktls": false 00:21:37.607 } 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "method": "sock_impl_set_options", 00:21:37.607 "params": { 00:21:37.607 "impl_name": "uring", 00:21:37.607 "recv_buf_size": 2097152, 00:21:37.607 "send_buf_size": 2097152, 00:21:37.607 "enable_recv_pipe": true, 00:21:37.607 "enable_quickack": false, 00:21:37.607 "enable_placement_id": 0, 00:21:37.607 "enable_zerocopy_send_server": false, 00:21:37.607 "enable_zerocopy_send_client": false, 00:21:37.607 "zerocopy_threshold": 0, 00:21:37.607 "tls_version": 0, 00:21:37.607 
"enable_ktls": false 00:21:37.607 } 00:21:37.607 } 00:21:37.607 ] 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "subsystem": "vmd", 00:21:37.607 "config": [] 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "subsystem": "accel", 00:21:37.607 "config": [ 00:21:37.607 { 00:21:37.607 "method": "accel_set_options", 00:21:37.607 "params": { 00:21:37.607 "small_cache_size": 128, 00:21:37.607 "large_cache_size": 16, 00:21:37.607 "task_count": 2048, 00:21:37.607 "sequence_count": 2048, 00:21:37.607 "buf_count": 2048 00:21:37.607 } 00:21:37.607 } 00:21:37.607 ] 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "subsystem": "bdev", 00:21:37.607 "config": [ 00:21:37.607 { 00:21:37.607 "method": "bdev_set_options", 00:21:37.607 "params": { 00:21:37.607 "bdev_io_pool_size": 65535, 00:21:37.607 "bdev_io_cache_size": 256, 00:21:37.607 "bdev_auto_examine": true, 00:21:37.607 "iobuf_small_cache_size": 128, 00:21:37.607 "iobuf_large_cache_size": 16 00:21:37.607 } 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "method": "bdev_raid_set_options", 00:21:37.607 "params": { 00:21:37.607 "process_window_size_kb": 1024, 00:21:37.607 "process_max_bandwidth_mb_sec": 0 00:21:37.607 } 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "method": "bdev_iscsi_set_options", 00:21:37.607 "params": { 00:21:37.607 "timeout_sec": 30 00:21:37.607 } 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "method": "bdev_nvme_set_options", 00:21:37.607 "params": { 00:21:37.607 "action_on_timeout": "none", 00:21:37.607 "timeout_us": 0, 00:21:37.607 "timeout_admin_us": 0, 00:21:37.607 "keep_alive_timeout_ms": 10000, 00:21:37.607 "arbitration_burst": 0, 00:21:37.607 "low_priority_weight": 0, 00:21:37.607 "medium_priority_weight": 0, 00:21:37.607 "high_priority_weight": 0, 00:21:37.607 "nvme_adminq_poll_period_us": 10000, 00:21:37.607 "nvme_ioq_poll_period_us": 0, 00:21:37.607 "io_queue_requests": 512, 00:21:37.607 "delay_cmd_submit": true, 00:21:37.607 "transport_retry_count": 4, 00:21:37.607 "bdev_retry_count": 3, 00:21:37.607 "transport_ack_timeout": 0, 00:21:37.607 "ctrlr_loss_timeout_sec": 0, 00:21:37.607 "reconnect_delay_sec": 0, 00:21:37.607 "fast_io_fail_timeout_sec": 0, 00:21:37.607 "disable_auto_failback": false, 00:21:37.607 "generate_uuids": false, 00:21:37.607 "transport_tos": 0, 00:21:37.607 "nvme_error_stat": false, 00:21:37.607 "rdma_srq_size": 0, 00:21:37.607 "io_path_stat": false, 00:21:37.607 "allow_accel_sequence": false, 00:21:37.607 "rdma_max_cq_size": 0, 00:21:37.607 "rdma_cm_event_timeout_ms": 0, 00:21:37.607 "dhchap_digests": [ 00:21:37.607 "sha256", 00:21:37.607 "sha384", 00:21:37.607 "sha512" 00:21:37.607 ], 00:21:37.607 "dhchap_dhgroups": [ 00:21:37.607 "null", 00:21:37.607 "ffdhe2048", 00:21:37.607 "ffdhe3072", 00:21:37.607 "ffdhe4096", 00:21:37.607 "ffdhe6144", 00:21:37.607 "ffdhe8192" 00:21:37.607 ] 00:21:37.607 } 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "method": "bdev_nvme_attach_controller", 00:21:37.607 "params": { 00:21:37.607 "name": "nvme0", 00:21:37.607 "trtype": "TCP", 00:21:37.607 "adrfam": "IPv4", 00:21:37.607 "traddr": "127.0.0.1", 00:21:37.607 "trsvcid": "4420", 00:21:37.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.607 "prchk_reftag": false, 00:21:37.607 "prchk_guard": false, 00:21:37.607 "ctrlr_loss_timeout_sec": 0, 00:21:37.607 "reconnect_delay_sec": 0, 00:21:37.607 "fast_io_fail_timeout_sec": 0, 00:21:37.607 "psk": "key0", 00:21:37.607 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:37.607 "hdgst": false, 00:21:37.607 "ddgst": false, 00:21:37.607 "multipath": "multipath" 00:21:37.607 } 00:21:37.607 }, 
00:21:37.607 { 00:21:37.607 "method": "bdev_nvme_set_hotplug", 00:21:37.607 "params": { 00:21:37.607 "period_us": 100000, 00:21:37.607 "enable": false 00:21:37.607 } 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "method": "bdev_wait_for_examine" 00:21:37.607 } 00:21:37.607 ] 00:21:37.607 }, 00:21:37.607 { 00:21:37.607 "subsystem": "nbd", 00:21:37.607 "config": [] 00:21:37.607 } 00:21:37.607 ] 00:21:37.607 }' 00:21:37.607 14:27:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:37.607 14:27:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.607 14:27:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:37.607 14:27:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.607 14:27:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:37.607 [2024-12-10 14:27:02.231015] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 00:21:37.607 [2024-12-10 14:27:02.231295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86119 ] 00:21:37.607 [2024-12-10 14:27:02.366898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.607 [2024-12-10 14:27:02.400153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.866 [2024-12-10 14:27:02.512558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:37.866 [2024-12-10 14:27:02.553736] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.433 14:27:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.433 14:27:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:38.433 14:27:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:38.433 14:27:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:38.433 14:27:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.691 14:27:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:38.691 14:27:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:38.691 14:27:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.691 14:27:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:38.691 14:27:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.691 14:27:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.691 14:27:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:38.950 14:27:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:38.950 14:27:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:38.950 14:27:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:38.950 14:27:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.950 14:27:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:38.950 14:27:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.950 14:27:03 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:39.518 14:27:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.3nSrZWt1fN /tmp/tmp.BIVWQeTqCR 00:21:39.518 14:27:04 keyring_file -- keyring/file.sh@20 -- # killprocess 86119 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86119 ']' 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86119 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86119 00:21:39.518 killing process with pid 86119 00:21:39.518 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.518 00:21:39.518 Latency(us) 00:21:39.518 [2024-12-10T14:27:04.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.518 [2024-12-10T14:27:04.355Z] =================================================================================================================== 00:21:39.518 [2024-12-10T14:27:04.355Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86119' 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@973 -- # kill 86119 00:21:39.518 14:27:04 keyring_file -- common/autotest_common.sh@978 -- # wait 86119 00:21:39.778 14:27:04 keyring_file -- keyring/file.sh@21 -- # killprocess 85872 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85872 ']' 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85872 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85872 00:21:39.778 killing process with pid 85872 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85872' 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@973 -- # kill 85872 00:21:39.778 14:27:04 keyring_file -- common/autotest_common.sh@978 -- # wait 85872 00:21:40.037 00:21:40.037 real 0m14.396s 00:21:40.037 user 0m37.231s 00:21:40.037 sys 0m2.654s 00:21:40.037 14:27:04 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.037 14:27:04 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:40.037 ************************************ 00:21:40.037 END TEST keyring_file 00:21:40.037 ************************************ 00:21:40.037 14:27:04 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:40.037 14:27:04 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:40.037 14:27:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.037 14:27:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.037 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:21:40.037 ************************************ 00:21:40.037 START TEST keyring_linux 00:21:40.037 ************************************ 00:21:40.037 14:27:04 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:40.037 Joined session keyring: 646481877 00:21:40.037 * Looking for test storage... 00:21:40.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:40.037 14:27:04 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:40.037 14:27:04 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:21:40.037 14:27:04 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:40.297 14:27:04 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.297 14:27:04 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:40.297 14:27:04 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.297 14:27:04 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.297 --rc genhtml_branch_coverage=1 00:21:40.297 --rc genhtml_function_coverage=1 00:21:40.297 --rc genhtml_legend=1 00:21:40.297 --rc geninfo_all_blocks=1 00:21:40.297 --rc geninfo_unexecuted_blocks=1 00:21:40.297 00:21:40.297 ' 00:21:40.297 14:27:04 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.297 --rc genhtml_branch_coverage=1 00:21:40.297 --rc genhtml_function_coverage=1 00:21:40.297 --rc genhtml_legend=1 00:21:40.297 --rc geninfo_all_blocks=1 00:21:40.297 --rc geninfo_unexecuted_blocks=1 00:21:40.297 00:21:40.297 ' 00:21:40.297 14:27:04 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.297 --rc genhtml_branch_coverage=1 00:21:40.297 --rc genhtml_function_coverage=1 00:21:40.297 --rc genhtml_legend=1 00:21:40.297 --rc geninfo_all_blocks=1 00:21:40.297 --rc geninfo_unexecuted_blocks=1 00:21:40.297 00:21:40.297 ' 00:21:40.297 14:27:04 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.298 --rc genhtml_branch_coverage=1 00:21:40.298 --rc genhtml_function_coverage=1 00:21:40.298 --rc genhtml_legend=1 00:21:40.298 --rc geninfo_all_blocks=1 00:21:40.298 --rc geninfo_unexecuted_blocks=1 00:21:40.298 00:21:40.298 ' 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.298 14:27:04 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=2e784d6e-8fde-4cf9-bd84-fddf1fdb5892 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.298 14:27:04 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.298 14:27:04 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.298 14:27:04 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.298 14:27:04 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.298 14:27:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.298 14:27:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.298 14:27:04 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.298 14:27:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:40.298 14:27:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.298 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:40.298 /tmp/:spdk-test:key0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:40.298 14:27:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:40.298 14:27:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:40.298 14:27:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:40.298 14:27:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:40.298 /tmp/:spdk-test:key1 00:21:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.298 14:27:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:40.298 14:27:05 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.298 14:27:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86246 00:21:40.298 14:27:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86246 00:21:40.298 14:27:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86246 ']' 00:21:40.298 14:27:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.298 14:27:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.299 14:27:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.299 14:27:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.299 14:27:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:40.299 [2024-12-10 14:27:05.087679] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
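Both prep_key calls above funnel a raw hex secret through format_interchange_psk, which is what produces the NVMeTLSkey-1:00:...: strings handed to keyctl in the next step. The helper's python body is not shown in this log, so the sketch below is an assumption about how the interchange string is built (base64 of the key bytes with a CRC32 appended, little-endian); only the NVMeTLSkey-1 prefix, the hex keys, the digest value 0, and the /tmp/:spdk-test:key0 path and chmod are taken from the trace.

    # Assumed reconstruction of the format_interchange_psk step; the CRC32 suffix and its
    # byte order are not visible in this log and may differ from SPDK's actual helper.
    psk_interchange() {
      local key=$1 digest=$2
      python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the hex string itself is encoded as the PSK bytes
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
    print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
    ' "$key" "$digest"
    }

    psk_interchange 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0   # mirrors the chmod in the trace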
00:21:40.299 [2024-12-10 14:27:05.087934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86246 ] 00:21:40.557 [2024-12-10 14:27:05.225770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.557 [2024-12-10 14:27:05.253647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.557 [2024-12-10 14:27:05.290444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:41.494 14:27:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.494 14:27:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:41.494 14:27:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:41.494 14:27:05 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.494 14:27:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:41.494 [2024-12-10 14:27:05.990497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.494 null0 00:21:41.494 [2024-12-10 14:27:06.022481] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.494 [2024-12-10 14:27:06.022776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.494 14:27:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:41.494 224488885 00:21:41.494 14:27:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:41.494 803143958 00:21:41.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:41.494 14:27:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86260 00:21:41.494 14:27:06 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:41.494 14:27:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86260 /var/tmp/bperf.sock 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86260 ']' 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:41.494 [2024-12-10 14:27:06.104946] Starting SPDK v25.01-pre git sha1 e576aacaf / DPDK 24.03.0 initialization... 
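Before bdevperf comes up, linux.sh@66-67 above add the two interchange-format PSKs to the caller's session keyring with keyctl, and the returned serial numbers (224488885 and 803143958 in this run) are what the later keyctl search and keyctl unlink steps operate on. A condensed sketch of that setup follows; capturing the serials into sn0/sn1 and reading the key material with cat are illustrative choices, the test itself passes the literal strings and only echoes the serials.

    # Condensed from linux.sh@66-70 above.
    sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)   # e.g. 224488885
    sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)   # e.g. 803143958

    # bdevperf is started idle (-z --wait-for-rpc) so keyring_linux_set_options and
    # framework_start_init can be issued over /var/tmp/bperf.sock before any I/O runs.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc &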
00:21:41.494 [2024-12-10 14:27:06.105240] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86260 ] 00:21:41.494 [2024-12-10 14:27:06.251800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.494 [2024-12-10 14:27:06.281897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.494 14:27:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:41.494 14:27:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:41.494 14:27:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:41.753 14:27:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:41.753 14:27:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:42.012 [2024-12-10 14:27:06.844633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:42.271 14:27:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:42.271 14:27:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:42.530 [2024-12-10 14:27:07.115794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.530 nvme0n1 00:21:42.530 14:27:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:42.530 14:27:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:42.530 14:27:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:42.530 14:27:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:42.530 14:27:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.530 14:27:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:42.789 14:27:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:42.789 14:27:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:42.789 14:27:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:42.789 14:27:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:42.789 14:27:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:42.789 14:27:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.789 14:27:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@25 -- # sn=224488885 00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
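With the keys in place, the trace above shows the TLS attach itself: bdev_nvme_attach_controller is issued over the bperf socket with --psk :spdk-test:key0, bdevperf notes that TLS support is considered experimental, and the resulting nvme0n1 controller is then sanity-checked via keyring_get_keys and a keyctl search for the same serial. Pulled out of the xtrace noise, the RPC sequence amounts to the sketch below; the rpc() wrapper is shorthand introduced here, the test uses its own bperf_cmd helper.

    # Same sequence as linux.sh@73-77 and check_keys above, written as plain rpc.py calls.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    rpc keyring_linux_set_options --enable
    rpc framework_start_init
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    rpc keyring_get_keys | jq length                                     # expect 1 key registered
    rpc keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")'
    keyctl search @s user :spdk-test:key0                                # same serial as linux.sh@66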
00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 224488885 == \2\2\4\4\8\8\8\8\5 ]] 00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 224488885 00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:43.048 14:27:07 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:43.048 Running I/O for 1 seconds... 00:21:44.244 15029.00 IOPS, 58.71 MiB/s 00:21:44.244 Latency(us) 00:21:44.244 [2024-12-10T14:27:09.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.244 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:44.244 nvme0n1 : 1.01 15026.71 58.70 0.00 0.00 8476.61 2472.49 11081.54 00:21:44.244 [2024-12-10T14:27:09.081Z] =================================================================================================================== 00:21:44.244 [2024-12-10T14:27:09.081Z] Total : 15026.71 58.70 0.00 0.00 8476.61 2472.49 11081.54 00:21:44.244 { 00:21:44.244 "results": [ 00:21:44.244 { 00:21:44.244 "job": "nvme0n1", 00:21:44.244 "core_mask": "0x2", 00:21:44.244 "workload": "randread", 00:21:44.244 "status": "finished", 00:21:44.244 "queue_depth": 128, 00:21:44.244 "io_size": 4096, 00:21:44.244 "runtime": 1.008737, 00:21:44.244 "iops": 15026.71162057107, 00:21:44.244 "mibps": 58.69809226785574, 00:21:44.244 "io_failed": 0, 00:21:44.244 "io_timeout": 0, 00:21:44.244 "avg_latency_us": 8476.608350825847, 00:21:44.244 "min_latency_us": 2472.4945454545455, 00:21:44.244 "max_latency_us": 11081.541818181819 00:21:44.244 } 00:21:44.244 ], 00:21:44.244 "core_count": 1 00:21:44.244 } 00:21:44.244 14:27:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:44.244 14:27:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:44.503 14:27:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:44.503 14:27:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:44.503 14:27:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:44.503 14:27:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:44.503 14:27:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:44.503 14:27:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.761 14:27:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:44.761 14:27:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:44.761 14:27:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:44.761 14:27:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:44.761 14:27:09 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:44.761 14:27:09 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:44.762 
14:27:09 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:44.762 14:27:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.762 14:27:09 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:44.762 14:27:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.762 14:27:09 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:44.762 14:27:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:45.020 [2024-12-10 14:27:09.686043] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:45.020 [2024-12-10 14:27:09.686922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe421d0 (107): Transport endpoint is not connected 00:21:45.020 [2024-12-10 14:27:09.687913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe421d0 (9): Bad file descriptor 00:21:45.020 [2024-12-10 14:27:09.688911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:45.020 [2024-12-10 14:27:09.688933] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:45.020 [2024-12-10 14:27:09.688959] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:45.020 [2024-12-10 14:27:09.688994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:45.020 request: 00:21:45.020 { 00:21:45.020 "name": "nvme0", 00:21:45.020 "trtype": "tcp", 00:21:45.020 "traddr": "127.0.0.1", 00:21:45.020 "adrfam": "ipv4", 00:21:45.020 "trsvcid": "4420", 00:21:45.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:45.020 "prchk_reftag": false, 00:21:45.020 "prchk_guard": false, 00:21:45.020 "hdgst": false, 00:21:45.020 "ddgst": false, 00:21:45.020 "psk": ":spdk-test:key1", 00:21:45.020 "allow_unrecognized_csi": false, 00:21:45.021 "method": "bdev_nvme_attach_controller", 00:21:45.021 "req_id": 1 00:21:45.021 } 00:21:45.021 Got JSON-RPC error response 00:21:45.021 response: 00:21:45.021 { 00:21:45.021 "code": -5, 00:21:45.021 "message": "Input/output error" 00:21:45.021 } 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@33 -- # sn=224488885 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 224488885 00:21:45.021 1 links removed 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@33 -- # sn=803143958 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 803143958 00:21:45.021 1 links removed 00:21:45.021 14:27:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86260 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86260 ']' 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86260 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86260 00:21:45.021 killing process with pid 86260 00:21:45.021 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.021 00:21:45.021 Latency(us) 00:21:45.021 [2024-12-10T14:27:09.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.021 [2024-12-10T14:27:09.858Z] =================================================================================================================== 00:21:45.021 [2024-12-10T14:27:09.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.021 14:27:09 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86260' 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 86260 00:21:45.021 14:27:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 86260 00:21:45.280 14:27:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86246 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86246 ']' 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86246 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86246 00:21:45.280 killing process with pid 86246 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86246' 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 86246 00:21:45.280 14:27:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 86246 00:21:45.539 ************************************ 00:21:45.539 END TEST keyring_linux 00:21:45.539 ************************************ 00:21:45.539 00:21:45.539 real 0m5.384s 00:21:45.539 user 0m10.534s 00:21:45.539 sys 0m1.374s 00:21:45.539 14:27:10 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.539 14:27:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:45.539 14:27:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:45.539 14:27:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:45.539 14:27:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:45.539 14:27:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:45.539 14:27:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:45.539 14:27:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:45.539 14:27:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:45.539 14:27:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.539 14:27:10 -- common/autotest_common.sh@10 -- # set +x 00:21:45.539 14:27:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:45.539 14:27:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:45.539 14:27:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:45.539 14:27:10 -- common/autotest_common.sh@10 -- # set +x 00:21:47.444 INFO: APP EXITING 00:21:47.444 INFO: killing all VMs 
00:21:47.444 INFO: killing vhost app 00:21:47.444 INFO: EXIT DONE 00:21:48.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:48.012 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:48.012 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:48.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:48.838 Cleaning 00:21:48.838 Removing: /var/run/dpdk/spdk0/config 00:21:48.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:48.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:48.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:48.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:48.839 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:48.839 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:48.839 Removing: /var/run/dpdk/spdk1/config 00:21:48.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:48.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:48.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:48.839 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:48.839 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:48.839 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:48.839 Removing: /var/run/dpdk/spdk2/config 00:21:48.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:48.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:48.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:48.839 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:48.839 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:48.839 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:48.839 Removing: /var/run/dpdk/spdk3/config 00:21:48.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:48.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:48.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:48.839 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:48.839 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:48.839 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:48.839 Removing: /var/run/dpdk/spdk4/config 00:21:48.839 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:48.839 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:48.839 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:48.839 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:48.839 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:48.839 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:48.839 Removing: /dev/shm/nvmf_trace.0 00:21:48.839 Removing: /dev/shm/spdk_tgt_trace.pid57895 00:21:48.839 Removing: /var/run/dpdk/spdk0 00:21:48.839 Removing: /var/run/dpdk/spdk1 00:21:48.839 Removing: /var/run/dpdk/spdk2 00:21:48.839 Removing: /var/run/dpdk/spdk3 00:21:48.839 Removing: /var/run/dpdk/spdk4 00:21:48.839 Removing: /var/run/dpdk/spdk_pid57748 00:21:48.839 Removing: /var/run/dpdk/spdk_pid57895 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58088 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58175 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58189 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58299 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58309 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58443 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58639 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58793 00:21:48.839 Removing: /var/run/dpdk/spdk_pid58865 00:21:48.839 
Removing: /var/run/dpdk/spdk_pid58942
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59028
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59100
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59133
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59168
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59238
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59324
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59763
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59802
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59840
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59854
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59911
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59927
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59981
00:21:48.839 Removing: /var/run/dpdk/spdk_pid59997
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60043
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60053
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60093
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60098
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60229
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60264
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60347
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60673
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60685
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60716
00:21:48.839 Removing: /var/run/dpdk/spdk_pid60730
00:21:49.098 Removing: /var/run/dpdk/spdk_pid60745
00:21:49.098 Removing: /var/run/dpdk/spdk_pid60764
00:21:49.098 Removing: /var/run/dpdk/spdk_pid60778
00:21:49.098 Removing: /var/run/dpdk/spdk_pid60792
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60807
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60820
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60836
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60855
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60868
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60884
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60903
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60911
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60932
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60945
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60959
00:21:49.099 Removing: /var/run/dpdk/spdk_pid60974
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61005
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61018
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61048
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61120
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61143
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61152
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61181
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61185
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61198
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61235
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61248
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61277
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61281
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61296
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61300
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61309
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61319
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61323
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61338
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61361
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61386
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61397
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61420
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61435
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61437
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61477
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61489
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61510
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61523
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61525
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61538
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61540
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61552
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61555
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61557
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61639
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61681
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61792
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61829
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61873
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61888
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61905
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61920
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61951
00:21:49.099 Removing: /var/run/dpdk/spdk_pid61972
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62048
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62065
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62115
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62172
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62217
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62249
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62343
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62390
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62418
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62650
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62743
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62767
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62801
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62829
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62868
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62896
00:21:49.099 Removing: /var/run/dpdk/spdk_pid62933
00:21:49.099 Removing: /var/run/dpdk/spdk_pid63318
00:21:49.099 Removing: /var/run/dpdk/spdk_pid63358
00:21:49.099 Removing: /var/run/dpdk/spdk_pid63686
00:21:49.099 Removing: /var/run/dpdk/spdk_pid64146
00:21:49.099 Removing: /var/run/dpdk/spdk_pid64425
00:21:49.099 Removing: /var/run/dpdk/spdk_pid65266
00:21:49.099 Removing: /var/run/dpdk/spdk_pid66171
00:21:49.099 Removing: /var/run/dpdk/spdk_pid66284
00:21:49.099 Removing: /var/run/dpdk/spdk_pid66356
00:21:49.358 Removing: /var/run/dpdk/spdk_pid67744
00:21:49.358 Removing: /var/run/dpdk/spdk_pid68047
00:21:49.358 Removing: /var/run/dpdk/spdk_pid71760
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72116
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72225
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72348
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72369
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72390
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72411
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72502
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72639
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72782
00:21:49.358 Removing: /var/run/dpdk/spdk_pid72851
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73045
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73102
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73186
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73535
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73942
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73943
00:21:49.358 Removing: /var/run/dpdk/spdk_pid73944
00:21:49.358 Removing: /var/run/dpdk/spdk_pid74212
00:21:49.358 Removing: /var/run/dpdk/spdk_pid74471
00:21:49.358 Removing: /var/run/dpdk/spdk_pid74852
00:21:49.358 Removing: /var/run/dpdk/spdk_pid74854
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75172
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75192
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75206
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75238
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75247
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75589
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75638
00:21:49.358 Removing: /var/run/dpdk/spdk_pid75976
00:21:49.358 Removing: /var/run/dpdk/spdk_pid76167
00:21:49.358 Removing: /var/run/dpdk/spdk_pid76591
00:21:49.358 Removing: /var/run/dpdk/spdk_pid77154
00:21:49.358 Removing: /var/run/dpdk/spdk_pid78026
00:21:49.358 Removing: /var/run/dpdk/spdk_pid78658
00:21:49.358 Removing: /var/run/dpdk/spdk_pid78660
00:21:49.358 Removing: /var/run/dpdk/spdk_pid80690
00:21:49.358 Removing: /var/run/dpdk/spdk_pid80737
00:21:49.358 Removing: /var/run/dpdk/spdk_pid80790
00:21:49.358 Removing: /var/run/dpdk/spdk_pid80838
00:21:49.359 Removing: /var/run/dpdk/spdk_pid80946
00:21:49.359 Removing: /var/run/dpdk/spdk_pid80993
00:21:49.359 Removing: /var/run/dpdk/spdk_pid81046
00:21:49.359 Removing: /var/run/dpdk/spdk_pid81093
00:21:49.359 Removing: /var/run/dpdk/spdk_pid81449
00:21:49.359 Removing: /var/run/dpdk/spdk_pid82652
00:21:49.359 Removing: /var/run/dpdk/spdk_pid82798
00:21:49.359 Removing: /var/run/dpdk/spdk_pid83029
00:21:49.359 Removing: /var/run/dpdk/spdk_pid83619
00:21:49.359 Removing: /var/run/dpdk/spdk_pid83782
00:21:49.359 Removing: /var/run/dpdk/spdk_pid83944
00:21:49.359 Removing: /var/run/dpdk/spdk_pid84041
00:21:49.359 Removing: /var/run/dpdk/spdk_pid84199
00:21:49.359 Removing: /var/run/dpdk/spdk_pid84308
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85011
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85046
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85081
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85331
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85366
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85400
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85872
00:21:49.359 Removing: /var/run/dpdk/spdk_pid85882
00:21:49.359 Removing: /var/run/dpdk/spdk_pid86119
00:21:49.359 Removing: /var/run/dpdk/spdk_pid86246
00:21:49.359 Removing: /var/run/dpdk/spdk_pid86260
00:21:49.359 Clean
00:21:49.618 14:27:14 -- common/autotest_common.sh@1453 -- # return 0
00:21:49.618 14:27:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:49.618 14:27:14 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:49.618 14:27:14 -- common/autotest_common.sh@10 -- # set +x
00:21:49.618 14:27:14 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:49.618 14:27:14 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:49.618 14:27:14 -- common/autotest_common.sh@10 -- # set +x
00:21:49.618 14:27:14 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:49.618 14:27:14 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:49.618 14:27:14 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:49.618 14:27:14 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:49.618 14:27:14 -- spdk/autotest.sh@398 -- # hostname
00:21:49.618 14:27:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:49.877 geninfo: WARNING: invalid characters removed from testname!
00:22:11.880 14:27:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:15.168 14:27:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:17.703 14:27:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:20.238 14:27:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:22.142 14:27:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:25.430 14:27:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:27.336 14:27:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:27.336 14:27:52 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:27.336 14:27:52 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:22:27.336 14:27:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:27.336 14:27:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:27.336 14:27:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:27.336 + [[ -n 5262 ]]
00:22:27.336 + sudo kill 5262
00:22:27.345 [Pipeline] }
00:22:27.361 [Pipeline] // timeout
00:22:27.367 [Pipeline] }
00:22:27.381 [Pipeline] // stage
00:22:27.387 [Pipeline] }
00:22:27.402 [Pipeline] // catchError
00:22:27.411 [Pipeline] stage
00:22:27.414 [Pipeline] { (Stop VM)
00:22:27.426 [Pipeline] sh
00:22:27.707 + vagrant halt
00:22:30.240 ==> default: Halting domain...
00:22:36.832 [Pipeline] sh
00:22:37.168 + vagrant destroy -f
00:22:40.460 ==> default: Removing domain...
00:22:40.475 [Pipeline] sh
00:22:40.748 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:22:40.756 [Pipeline] }
00:22:40.769 [Pipeline] // stage
00:22:40.773 [Pipeline] }
00:22:40.786 [Pipeline] // dir
00:22:40.791 [Pipeline] }
00:22:40.804 [Pipeline] // wrap
00:22:40.809 [Pipeline] }
00:22:40.821 [Pipeline] // catchError
00:22:40.827 [Pipeline] stage
00:22:40.828 [Pipeline] { (Epilogue)
00:22:40.836 [Pipeline] sh
00:22:41.112 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:46.395 [Pipeline] catchError
00:22:46.397 [Pipeline] {
00:22:46.409 [Pipeline] sh
00:22:46.691 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:46.951 Artifacts sizes are good
00:22:46.960 [Pipeline] }
00:22:46.974 [Pipeline] // catchError
00:22:46.985 [Pipeline] archiveArtifacts
00:22:47.001 Archiving artifacts
00:22:47.123 [Pipeline] cleanWs
00:22:47.135 [WS-CLEANUP] Deleting project workspace...
00:22:47.135 [WS-CLEANUP] Deferred wipeout is used...
00:22:47.142 [WS-CLEANUP] done
00:22:47.144 [Pipeline] }
00:22:47.159 [Pipeline] // stage
00:22:47.164 [Pipeline] }
00:22:47.178 [Pipeline] // node
00:22:47.184 [Pipeline] End of Pipeline
00:22:47.228 Finished: SUCCESS